HCC surveillance after anti-HCV therapy cost effective only for patients with cirrhosis

For patients with hepatitis C virus (HCV)–related cirrhosis (F4), but not those with advanced fibrosis (F3), hepatocellular carcinoma (HCC) surveillance after a sustained virologic response (SVR) is cost effective, according to investigators.

Current international guidelines call for HCC surveillance among all patients with advanced fibrosis (F3) or cirrhosis (F4) who have achieved SVR, but this is “very unlikely to be cost effective,” reported lead author Hooman Farhang Zangneh, MD, of Toronto General Hospital and colleagues. “HCV-related HCC rarely occurs in patients without cirrhosis,” the investigators explained in Clinical Gastroenterology and Hepatology. “With cirrhosis present, HCC incidence is 1.4% to 4.9% per year. If found early, options for curative therapy include radiofrequency ablation (RFA), surgical resection, and liver transplantation.”

The investigators developed a Markov model to determine which at-risk patients could undergo surveillance while remaining below willingness-to-pay thresholds. Specifically, cost-effectiveness was assessed for annual or biannual (twice-yearly) ultrasound screening among patients aged 50 years with advanced fibrosis (F3) or compensated cirrhosis (F4) who had achieved an SVR. Relevant data were drawn from expert opinion, the medical literature, and Canada Life Tables. Various HCC incidence rates were tested, including a constant annual rate, rates based on type of antiviral treatment (direct-acting or interferon-based therapy), rates based on stage of fibrosis, and a rate that increased with age. The model was validated by applying it to patients with F3 or F4 fibrosis who had not yet achieved an SVR. All monetary values were reported in 2015 Canadian dollars.
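As a rough illustration of how a model of this kind turns incidence, cost, and utility inputs into cost-effectiveness estimates, the sketch below implements a minimal three-state Markov cohort (post-SVR, HCC, dead) in Python. Every state, probability, cost, and utility shown is a hypothetical placeholder chosen for illustration, not a value from the study.

def run_cohort(p_hcc, p_die_hcc, annual_hcc_cost, annual_surv_cost, cycles=30, discount=0.015):
    """Return (discounted cost, discounted QALYs) per patient for one strategy."""
    alive_no_hcc, alive_hcc = 1.0, 0.0           # cohort starts free of HCC
    p_die_background = 0.02                      # hypothetical annual background mortality
    utility_no_hcc, utility_hcc = 0.85, 0.60     # hypothetical health-state utilities
    total_cost = total_qaly = 0.0
    for year in range(cycles):
        d = 1.0 / (1.0 + discount) ** year       # discount factor for this cycle
        total_cost += d * (alive_no_hcc * annual_surv_cost + alive_hcc * annual_hcc_cost)
        total_qaly += d * (alive_no_hcc * utility_no_hcc + alive_hcc * utility_hcc)
        new_hcc = alive_no_hcc * p_hcc           # transitions applied to the current distribution
        alive_hcc = alive_hcc * (1 - p_die_hcc) + new_hcc
        alive_no_hcc = alive_no_hcc * (1 - p_hcc - p_die_background)
    return total_cost, total_qaly

# Surveillance is assumed to catch HCC earlier (lower HCC mortality) at an annual screening cost.
cost_s, qaly_s = run_cohort(p_hcc=0.005, p_die_hcc=0.10, annual_hcc_cost=40000, annual_surv_cost=300)
cost_n, qaly_n = run_cohort(p_hcc=0.005, p_die_hcc=0.30, annual_hcc_cost=30000, annual_surv_cost=0)
icer = (cost_s - cost_n) / (qaly_s - qaly_n)
print(f"Illustrative ICER: ${icer:,.0f} per QALY gained")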

To reflect current guidelines, the investigators first tested costs of surveillance among all patients with F3 or F4 fibrosis, assuming a constant annual HCC incidence rate of 0.5%. Biannual ultrasound surveillance after SVR caught more cases of HCC at a curable stage (78%) than no surveillance (29%); however, false-positives were relatively common, at 21.8% and 15.7% for biannual and annual surveillance, respectively. The investigators noted that, in the real world, some of these false-positive findings are not resolved by more advanced imaging, so patients go on to receive unnecessary RFA, which incurs additional costs. Partly for this reason, although biannual surveillance was more effective, it was also more expensive, with an incremental cost-effectiveness ratio (ICER) of $106,792 per quality-adjusted life-year (QALY), compared with $72,105 per QALY for annual surveillance.
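For context, the ICER is the standard summary measure in these analyses: the extra cost of one strategy over its comparator divided by the extra health benefit, with a strategy generally deemed cost effective when the ratio falls below a willingness-to-pay threshold (commonly cited values are roughly $50,000 to $100,000 per QALY). In the shorthand below (our notation, not the study's):

\[
\mathrm{ICER} = \frac{C_{\text{surveillance}} - C_{\text{no surveillance}}}{\mathrm{QALY}_{\text{surveillance}} - \mathrm{QALY}_{\text{no surveillance}}}
\]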

When the analysis included only patients with F3 fibrosis after interferon-based therapy, using an HCC incidence of 0.23% per year, biannual and annual ICERs rose to $484,160 and $204,708 per QALY, respectively, both of which exceed standard willingness-to-pay thresholds. In comparison, biannual and annual ICERs were at most $55,850 and $42,305 per QALY, respectively, among patients with cirrhosis before interferon-induced SVR, using an HCC incidence rate of up to 1.39% per year.

“These results suggest that biannual (or annual) HCC surveillance is likely to be cost effective for patients with cirrhosis, but not for patients with F3 fibrosis before SVR,” the investigators wrote.

ICERs for HCC surveillance among patients with cirrhosis after direct-acting antiviral-induced SVR were lower still, at $43,229 and $34,307 per QALY for biannual and annual surveillance, respectively, far below the corresponding figures for patients with F3 fibrosis ($188,157 and $111,667 per QALY).

Focusing on the evident savings associated with surveillance of patients with cirrhosis, the investigators tested two diagnostic thresholds within this population with the aim of reducing costs further. They found that surveillance of patients with a pretreatment aspartate aminotransferase to platelet ratio index (APRI) greater than 2.0 (HCC incidence, 0.89%) was associated with biannual and annual ICERs of $48,729 and $37,806 per QALY, respectively, but when APRI was less than 2.0 (HCC incidence, 0.093%), surveillance was less effective and more expensive than no surveillance at all. A similar trend was found for an FIB-4 threshold of 3.25.
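For reference, both scores are calculated from routine laboratory values; these are the standard published definitions rather than study-specific formulas:

\[
\mathrm{APRI} = \frac{\mathrm{AST}/\mathrm{AST_{ULN}}}{\text{platelet count } (10^{9}/\mathrm{L})} \times 100,
\qquad
\mathrm{FIB\text{-}4} = \frac{\text{age (years)} \times \mathrm{AST}\ (\mathrm{U/L})}{\text{platelet count } (10^{9}/\mathrm{L}) \times \sqrt{\mathrm{ALT}\ (\mathrm{U/L})}}
\]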

Applying an age-stratified HCC risk also reduced the cost of screening for patients with cirrhosis. With this strategy, the ICER was $48,432 per QALY for biannual surveillance and $37,201 per QALY for annual surveillance.

“These data suggest that, if we assume HCC incidence increases with age, biannual or annual surveillance will be cost effective for the vast majority, if not all, patients with cirrhosis before SVR,” the investigators wrote.

“Our analysis suggests that HCC surveillance is very unlikely to be cost effective in patients with F3 fibrosis, whereas both annual and biannual modalities are likely to be cost effective at standard willingness-to-pay thresholds for patients with cirrhosis compared with no surveillance,” the investigators wrote.

“Additional long-term follow-up data are required to help identify patients at highest risk of HCC after SVR to tailor surveillance guidelines,” the investigators concluded.

The study was funded by the Toronto Centre for Liver Disease. The investigators declared no conflicts of interest.

This story was updated on 7/12/2019.

SOURCE: Zangneh et al. Clin Gastroenterol Hepatol. 2018 Dec 20. doi: 10.1016/j.cgh.2018.12.018.

Genomic study reveals five subtypes of colorectal cancer

Colorectal cancer can be divided into five DNA methylation subtypes that predict molecular and clinical behavior and may offer future therapeutic targets, according to investigators.

In 216 unselected colorectal cancers, five subtypes of the CpG island methylator phenotype (CIMP) showed “striking” associations with sex, age, and tumor location, reported lead author Lochlan Fennell, MD, of the QIMR Berghofer Medical Research Institute in Queensland, Australia, and colleagues. CIMP level increased with age in a stepwise fashion, they noted.

Further associations between CIMP subtype and BRAF mutation status support the investigators’ recent report that sessile serrated adenomas are rare in young patients and pose little risk of malignancy. With additional research, these findings could “inform the development of patient-centric surveillance for young and older patients who present with sessile serrated adenomas,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology.

“CIMP can be detected using a standardized marker panel to stratify tumors as CIMP-high, CIMP-low, or CIMP-negative,” the investigators noted. In the present study, they expanded these three existing subtypes into five, allowing for better prediction of the clinical and molecular characteristics associated with disease progression.

Initial genomic testing showed that 13.4% of cases carried a BRAF V600E mutation, 34.7% were mutated at KRAS codon 12 or 13, and 42.2% of patients had a TP53 mutation. Sorted into the three previously described subtypes, CIMP-negative was most common (68.5%), followed by CIMP-low (20.4%) and CIMP-high (11.1%). About two-thirds (66%) of BRAF mutant cancers were CIMP-high, compared with just 3% of BRAF wild-type cases (P less than .0001). KRAS-mutated cases were more often CIMP-low than KRAS wild-type cancers (34.6% vs. 12.8%; P less than .001).

With use of Illumina HumanMethylation450 BeadChip arrays and recursively partitioned mixed-model clustering, five methylation clusters were identified: CIMP-H1 and CIMP-H2 (high methylation), CIMP-L1 and CIMP-L2 (intermediate methylation), and CIMP-negative (low methylation). As described above, methylation level demonstrated a direct relationship with age, ranging from CIMP-negative (61.9 years) to CIMP-H1 (75.2 years). The investigators also reported unique characteristics of each new subtype. For instance, the CIMP-H1 cluster had many features in common with serrated neoplasia, such as a high rate of BRAF mutation (73.9%; P less than .0001).

“BRAF mutations are a hallmark of the serrated neoplasia pathway, and indicate that these cancers probably arose in serrated precursor lesions,” the investigators wrote. “We previously showed that the colonoscopic incidence of sessile serrated adenomas does not differ between patients aged in their 30s and patients who are much older, whereas BRAF mutant cancers were restricted to older individuals, suggesting these BRAF mutant polyps may have limited malignant potential in young patients.”

In contrast with CIMP-H1 cases, CIMP-H2 cancers were more often KRAS mutant (54.5% vs. 17.4%). Other findings revealed associations between subtype and location; for example, CIMP-L1 cases were distributed equally between the distal and proximal colon, whereas CIMP-L2 cases more often localized to the distal colon and rectum. Of note, most CIMP-negative cancers (62.3%) occurred in the distal colon, and none had a BRAF mutation.

The five methylation subtypes also showed associations with consensus molecular subtypes (CMS) to varying degrees. The two strongest correlations were found in CIMP-H1 cancers and CIMP-H2 cancers, which were most frequently classified as CMS1 (69.6%) and CMS3 (54.5%), respectively.

Using CIBERSORT, the investigators detected a variety of associations between the five subtypes and stromal immune cell composition. For example, CIMP-H1 cases were enriched for macrophages, compared with the other subtypes, except CIMP-L2. Mast cells showed a stepwise relationship with subtype; they contributed the most to the immune microenvironment of CIMP-negative cancers and the least to cases classified as CIMP-H1. A converse trend was found with natural killer cells.

Of note, in CIMP-H1 and CIMP-H2 cancers, oncogenes were significantly more likely than tumor-suppressor genes to undergo gene body methylation, which is positively correlated with gene expression, and oncogenes in these subtypes had significantly greater gene body methylation than their counterparts in normal colonic mucosa.

“The five subtypes identified in this study are highly correlated with key clinical and molecular features, including patient age, tumor location, microsatellite instability, and oncogenic mitogen-activated protein kinase mutations,” they wrote. “We show that cancers with high DNA methylation show an increased preponderance for mutating genes involved in epigenetic regulation, and namely those that are implicated in the chromatin remodeling process.”

Concluding, the investigators explained the role of their research in future therapy development. “Our analyses have identified potentially druggable vulnerabilities in cancers of different methylation subtypes,” they wrote. “Inhibitors targeting synthetic lethalities, such as SWI/SNF component inhibitors for those with ARID mutations, should be evaluated because these agents may be clinically beneficial to certain patient subsets.”

The study was funded by the National Health and Medical Research Council, the US National Institutes of Health, Pathology Queensland, and others. The investigators disclosed no conflicts of interest.

SOURCE: Fennell L et al. Cell Mol Gastroenterol Hepatol. 2019 Apr 4. doi: 10.1016/j.jcmgh.2019.04.002.

Stage now set for functional studies

Genomic, epigenomic, and transcriptomic information has revealed molecular subclasses of colorectal cancer (CRC), which has refined our understanding of the molecular and cellular biology of CRC and improved our treatment of patients with CRC. Several reliable and clinically useful molecular subtypes of colorectal cancer have been identified, including microsatellite instability (MSI), chromosomal instability (CIN), the CpG island methylator phenotype (CIMP), and consensus molecular subtypes (CMS) 1-4. Despite these substantial advances, it is also clear that we still only partially grasp the molecular and cellular biology driving CRC.

The studies by Fennell et al. provide new insights into the CIMP subtype of CRC that address this knowledge gap. Using a large CRC cohort and more detailed molecular information than was available in prior studies, they have identified previously unrecognized CRC CIMP subtypes that have unique methylomes and mutation patterns. These five CIMP subclasses vary with regard to location in the colon, frequency of KRAS and BRAF mutations and MSI, and alterations in epigenetic regulatory genes. The observations related to differences in the frequencies of MSI and of KRAS and BRAF mutations help demystify the heterogeneity in clinical and cellular behavior that has been seen in the broader class of CIMP cancers. Perhaps most importantly, their studies identify plausible driver molecular alterations unique to the CIMP subclasses, such as subclass-specific mutations in epigenetic regulatory genes and activated oncogenes. These are promising novel targets for chemoprevention strategies and therapies. Fennell and colleagues have now set the stage for functional studies of these molecular alterations to determine their true role in the cellular and clinical behavior of CRC.

William M. Grady, MD, is the Rodger C. Haggitt Professor of Medicine, department of medicine, division of gastroenterology, University of Washington School of Medicine, and clinical research division, Fred Hutchinson Cancer Research Center, Seattle. He is an advisory board member for Freenome and SEngine; has consulted for DiaCarta, Boehringer Ingelheim, and Guardant Health; and has conducted industry-sponsored research for Janssen and Cambridge Epigenetix.
 

Underwater endoscopic mucosal resection may be an option for colorectal lesions

For intermediate-size colorectal lesions, underwater endoscopic mucosal resection (UEMR) may offer cleaner margins than conventional EMR without increasing procedure time or risk of adverse events, based on a recent head-to-head trial conducted in Japan.

UEMR was associated with higher R0 and en bloc resection rates than was conventional EMR (CEMR) when used for intermediate-size colorectal lesions, reported lead author Takeshi Yamashina, MD, of Osaka (Japan) International Cancer Institute, and colleagues. The study was the first multicenter, randomized trial to demonstrate the superiority of UEMR over CEMR, they noted.

Although CEMR is a well-established method of removing sessile colorectal lesions, those larger than 10 mm can be difficult to resect en bloc, which contributes to a local recurrence rate exceeding 15% when alternative, piecemeal resection is performed, the investigators explained in Gastroenterology.

Recently, UEMR has emerged as “an alternative to CEMR and is reported to be effective for removing flat or large colorectal polyps,” the investigators wrote. “With UEMR, the bowel lumen is filled with water instead of air/CO2, and the lesion is captured and resected with a snare without submucosal injection of normal saline.”

To find out if UEMR offers better results than CEMR, the investigators recruited 211 patients with 214 colorectal lesions at five centers in Japan. Patients were aged at least 20 years and had mucosal lesions of 10-20 mm in diameter. Based on macroscopic appearance, pit pattern classification with magnifying chromoendoscopy, or narrow-band imaging, lesions were classified as adenoma, sessile serrated adenoma/polyp, or intramucosal adenocarcinoma. Patients were randomly assigned in a 1:1 ratio to the UEMR or CEMR group, and just prior to the procedure, operators were informed of the allocated treatment. Ten expert operators were involved, each with at least 10 years of experience, in addition to 18 nonexpert operators with less than 10 years of experience. The primary endpoint was the difference in R0 resection rate between the two groups, with R0 defined as en bloc resection with histologically negative margins. Secondary endpoints were en bloc resection rate, adverse events, and procedure time.

The results showed a clear win for UEMR, with an R0 resection rate of 69%, compared with 50% for CEMR (P = .011), and an en bloc resection rate that followed the same trend (89% vs. 75%; P = .007). Neither median procedure time nor the rate of adverse events differed significantly between groups.

Subset analysis showed that UEMR was best suited for lesions at least 15 mm in diameter, although the investigators pointed out that the superior R0 resection rate with UEMR held steady regardless of lesion morphology, size, location, or operator experience level.

The investigators suggested that the findings give reason to amend some existing recommendations. “Although the European Society of Gastrointestinal Endoscopy Clinical Guidelines suggest hot-snare polypectomy with submucosal injection for removing sessile polyps 10-19 mm in size, we found that UEMR was more effective than CEMR, in terms of better R0 and en bloc resection rates,” they wrote. “Hence, we think that UEMR will become an alternative to CEMR. It could fill the gap for removing polyps 9 mm [or larger] (indication for removal by cold-snare polypectomy) and [smaller than] 20 mm (indication for ESD removal).”

During the discussion, the investigators explained that UEMR achieves better outcomes primarily by improving access to lesions. Water immersion causes lesions to float upright into the lumen, while keeping the muscularis propria circular behind the submucosa, which allows for easier snaring and decreases risk of perforation. Furthermore, the investigators noted, water immersion limits flexure angulation, luminal distension, and loop formation, all of which improve maneuverability and visibility.

Still, UEMR may take some operator adjustment, the investigators added, going on to provide some pointers. “In practice, we think it is important to fill the entire lumen only with fluid, so we always deflate the lumen completely and then fill it with fluid,” they wrote. “[When the lumen is filled], it is not necessary to change the patient’s position during the UEMR procedure.”

“Also, in cases with unclear endoscopic vision, endoscopists are familiar with air insufflation but, during UEMR, it is better to infuse the fluid to expand the lumen and maintain a good endoscopic view. Therefore, for the beginner, we recommend that the air insufflation button of the endoscopy machine be switched off.”

Additional tips included using saline instead of distilled water, and employing thin, soft snares.

The investigators reported no external funding or conflicts of interest.

SOURCE: Yamashina T et al. Gastroenterology. 2019 Apr 11. doi: 10.1053/j.gastro.2019.04.005.

AGA Clinical Practice Update: Coagulation in cirrhosis

Cirrhosis can involve “precarious” changes in hemostatic pathways that tip the scales toward either bleeding or hypercoagulation, experts wrote in an American Gastroenterological Association Clinical Practice Update.

Based on current evidence, clinicians should not routinely correct thrombocytopenia and coagulopathy in patients with cirrhosis prior to low-risk procedures, such as therapeutic paracentesis, thoracentesis, and routine upper endoscopy for variceal ligation, Jacqueline G. O’Leary, MD, of Dallas VA Medical Center and her three coreviewers wrote in Gastroenterology.

To optimize clot formation prior to high-risk procedures, and in patients with active bleeding, a platelet count above 50,000 per mcL is still recommended. However, it may be more meaningful to couple that platelet target with a fibrinogen level above 120 mg/dL rather than rely on the international normalized ratio (INR), the experts wrote. Not only does INR vary significantly depending on which thromboplastin is used in the test, but “correcting” INR with a fresh frozen plasma infusion does not affect thrombin production and worsens portal hypertension. Using cryoprecipitate to replenish fibrinogen has less impact on portal hypertension. “Global tests of clot formation, such as rotational thromboelastometry (ROTEM), thromboelastography (TEG), sonorheometry, and thrombin generation may eventually have a role in the evaluation of clotting in patients with cirrhosis but currently lack validated target levels,” the experts wrote.

They advised clinicians to limit the use of blood products (such as fresh frozen plasma and pooled platelet transfusions) because of cost and the risk of exacerbated portal hypertension, infection, and immunologic complications. For severe anemia and uremia, red blood cell transfusion (250 mL) can be considered. Platelet-rich plasma from one donor is less immunologically risky than a pooled platelet transfusion. Thrombopoietin agonists are “a good alternative” to platelet transfusion but require about 10 days for response. Alternative prothrombotic therapies include oral thrombopoietin receptor agonists (avatrombopag and lusutrombopag) to boost platelet count before an invasive procedure, and antifibrinolytic therapy (aminocaproic acid and tranexamic acid) for persistent bleeding from mucosal oozing or puncture wounds. Desmopressin should be considered only for patients with comorbid renal failure.

For anticoagulation, the practice update recommends considering systemic heparin infusion for cirrhotic patients with symptomatic deep venous thrombosis (DVT) or portal vein thrombosis (PVT). However, the anti–factor Xa assay will not reliably monitor response if patients have low liver-derived antithrombin III (heparin cofactor). “With newly diagnosed PVT, the decision to intervene with directed therapy rests on the extent of the thrombosis, presence or absence of attributable symptoms, and the risk of bleeding and falls,” the experts stated.

Six-month follow-up imaging is recommended to assess anticoagulation efficacy. More frequent imaging can be considered for patients with PVT at high risk on therapeutic anticoagulation. If clots do not fully resolve after 6 months of treatment, options include extending therapy with the same agent, switching to a different anticoagulant class, or placement of a transjugular intrahepatic portosystemic shunt (TIPS). “The role for TIPS in PVT is evolving and may address complications like portal hypertensive bleeding, medically refractory clot, and the need for repeated banding after variceal bleeding,” the experts noted.

Prophylaxis of DVT is recommended for all hospitalized patients with cirrhosis. Vitamin K antagonists and direct-acting oral anticoagulants (dabigatran, apixaban, rivaroxaban, and edoxaban) are alternatives to heparin for anticoagulation of cirrhotic patients with either PVT or DVT, the experts wrote. However, DOACs are not recommended for most Child-Pugh B patients or for any Child-Pugh C patients.

No funding sources or conflicts of interest were reported.

SOURCE: O’Leary JG et al. Gastroenterology. 2019. doi: 10.1053/j.gastro.2019.03.070.



Atypical food allergies common in IBS


Among patients with irritable bowel syndrome (IBS) who tested negative for classic food allergies, confocal laser endomicroscopy showed that 70% had an immediate disruption of the intestinal barrier in response to at least one food challenge, with accompanying changes in epithelial tight junction proteins and eosinophils.

Among 108 patients who completed the study, 61% showed this atypical allergic response to wheat, wrote Annette Fritscher-Ravens, MD, PhD, of University Hospital Schleswig-Holstein in Kiel, Germany, and her associates. Strikingly, almost 70% of patients with atypical food allergies to wheat, yeast, milk, soy, or egg white who eliminated these foods from their diets showed at least an 80% improvement in IBS symptoms after 3 months. These findings were published in Gastroenterology.

Confocal laser endomicroscopy (CLE) “permits real-time detection and quantification of changes in intestinal tissues and cells, including increases in intraepithelial lymphocytes and fluid extravasation through epithelial leaks,” the investigators wrote. This approach helps clinicians objectively detect and measure gastrointestinal pathology in response to specific foods, potentially freeing IBS patients from highly restrictive diets that ease symptoms but are hard to follow, and are not meant for long-term use.

For the study, the researchers enrolled patients meeting Rome III IBS criteria who tested negative for common food antigens on immunoglobulin E serology and skin tests. During endoscopy, each patient underwent sequential duodenal challenges with 20-mL suspensions of wheat, yeast, milk, soy, and egg white, followed by CLE with biopsy.

Among 108 patients who finished the study, 76 (70%) were CLE positive. They and their first-degree relatives were significantly more likely to have atopic disorders than were CLE-negative patients (P = .001). The most common allergen was wheat (61% of patients), followed by yeast (20%), milk (9%), soy (7%), and egg white (4%). Also, nine patients reacted to two of the tested food antigens.

Compared with CLE-negative patients or controls, CLE-positive patients also had significantly more intraepithelial lymphocytes (P = .001) and postchallenge expression of claudin-2 (P = .023), which contributes to tight junction permeability and is known to be upregulated in intestinal barrier dysfunction, IBS, and inflammatory bowel disease. Conversely, levels of the tight junction protein occludin were significantly lower in duodenal biopsies from CLE-positive patients versus controls (P = .022). “Levels of mRNAs encoding inflammatory cytokines were unchanged in duodenal tissues after CLE challenge, but eosinophil degranulation increased,” the researchers wrote.

In a double-blind, randomized, crossover study, patients then followed either a diet that excluded the antigen to which they had tested positive or a sham (placebo) diet that excluded only some foods containing the antigen, with a 2-week washout period in between. The CLE-positive patients showed a 70% average improvement in Francis IBS severity score after 3 months of the intervention diet and a 76% improvement at 6 months. Strikingly, 68% of CLE-positive patients showed at least an 80% improvement in symptoms, while only 4% did not respond at all.

“Since we do not observe a histological mast cell/basophil increase or activation, and [we] do not find increased mast cell mediators (tryptase) in the duodenal fluid after positive challenge, we assume a nonclassical or atypical food allergy as cause of the mucosal reaction observed by CLE,” the researchers wrote. Other immune cell parameters remained unchanged, but additional studies are needed to see if these changes are truly absent or occur later after challenge. The researchers are conducting murine studies of eosinophilic food allergy to shed more light on these nonclassical food allergies.

Funders included the Rashid Hussein Charity Trust, the German Research Foundation, and the Leibniz Foundation. The researchers reported having no conflicts of interest.

SOURCE: Fritscher-Ravens A et al. Gastroenterology. 2019 May 14. doi: 10.1053/j.gastro.2019.03.046.



Algorithm predicts villous atrophy in children with potential celiac disease

Evidence-based prediction with a grain of salt

A new algorithm may be able to predict which children with potential celiac disease will go on to develop villous atrophy, according to investigators writing in Gastroenterology.

The risk model was developed from the largest cohort of its kind, with the longest follow-up to date, reported lead author Renata Auricchio, MD, PhD, of University Federico II in Naples, Italy, and colleagues. Using the algorithm, which relies most heavily on the baseline number of intraepithelial lymphocytes (IELs) in the mucosa, followed by age at diagnosis and genetic profile, clinicians may now consider prescribing gluten-free diets only to the highest-risk patients instead of to all suspected cases; the investigators noted that more than half of potential cases do not develop flat mucosa within 12 years.

Development of the algorithm began with enrollment of 340 children aged 2-18 years who were positive for immunoglobulin A endomysial antibodies and had tested positive twice consecutively for anti–tissue transglutaminase antibodies. Additionally, children were required to possess HLA DQ2- or DQ8-positive haplotypes and have normal duodenal architecture in five biopsy samples. Because of symptoms suggestive of celiac disease or parental discretion, 60 patients were started on a gluten-free diet and excluded from the study, leaving 280 patients in the final cohort. These patients were kept on a gluten-containing diet and followed for up to 12 years. Every 6 months, the investigators checked antibodies and clinical status, and every 2 years, small bowel biopsy was performed, if symptoms had not necessitated this earlier.

After a median follow-up of 60 months, ranging from 18 months to 12 years, 39 patients (13.9%) developed symptoms of celiac disease and were placed on a gluten-free diet, although they declined confirmatory biopsy, precluding formal classification of celiac disease. Another 33 patients (11.7%) were lost to follow-up, and 89 (32%) stopped producing antibodies, with none going on to develop villous atrophy. In total, 42 patients (15%) developed flat mucosa during the follow-up period, with an estimated cumulative incidence of 43% at 12 years. The investigators noted that patients most frequently progressed within two time frames: at 24-48 months after enrollment or at 96-120 months.
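
The gap between the 42 observed events (15% of the cohort) and the 43% estimated cumulative incidence at 12 years reflects censoring: most children were followed for far less than 12 years, and survival-analysis methods account for that. The sketch below shows the general approach with a Kaplan-Meier-type estimator; the follow-up times and events are invented for illustration and are not the study data.

```python
# Illustrative sketch only: invented follow-up data, not the study cohort.
from lifelines import KaplanMeierFitter

# Months of follow-up per child and whether flat mucosa (villous atrophy) occurred.
durations = [18, 24, 30, 36, 48, 60, 60, 72, 84, 96, 108, 120, 132, 144]
events    = [0,  1,  0,  1,  0,  0,  1,  0,  1,  0,  1,   0,   1,   0]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)

# Cumulative incidence = 1 - survival probability. Censored children still
# contribute person-time, so this estimate can exceed the crude event fraction.
cumulative_incidence = 1 - kmf.survival_function_
print(cumulative_incidence.tail())
```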

To develop the algorithm, the investigators performed multivariable analysis with several potential risk factors, including age, sex, genetic profile, mucosal characteristics, and concomitant autoimmune diseases. Of these, a high number of IELs at first biopsy was most strongly correlated with progression to celiac disease. Patients who developed villous atrophy had a mean of 11.9 IELs at first biopsy, compared with 6.44 among those who remained potential (P = .05). The next strongest predictive factors were age and genetic profile. Just 7% of children younger than 3 years developed flat mucosa, compared with 51% of patients aged 3-10 years and 55% of those older than 10 years (P = .007). HLA status was predictive in the group aged 3-10 years but not significant in the youngest or oldest patients. Therefore, HLA haplotype was included in the final algorithm, but with a smaller contribution than five non-HLA genes, namely IL12a, SH2B3, RGS1, CCR, and IL2/IL21.

“Combining these risk factors, we set up a model to predict the probability for a patient to evolve from potential celiac disease to villous atrophy,” the investigators wrote. “Overall, the discriminant analysis model allows us to correctly classify, at entry, 80% of the children who will not develop a flat mucosa over follow-up, while approximately 69% of those who will develop flat mucosa are correctly classified by the parameters we analyzed. This system is then more accurate to predict a child who will not develop flat mucosa and then can be monitored on a gluten-containing diet than a child who will become celiac.”
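
To make the modeling idea concrete, the sketch below fits a discriminant analysis to synthetic data with the same kinds of baseline predictors the authors describe. The feature names, data, and coefficients here are hypothetical and are not the published model.

```python
# Illustrative sketch only: synthetic data and hypothetical features, not the
# authors' fitted model or its coefficients.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 200
iel_count  = rng.normal(8, 3, n)       # IELs at first biopsy
age_group  = rng.integers(0, 3, n)     # 0: <3 years, 1: 3-10 years, 2: >10 years
gene_score = rng.normal(0, 1, n)       # combined non-HLA/HLA risk score (hypothetical)

# Synthetic outcome: higher IEL count and older age raise the progression risk.
logit = -4 + 0.35 * iel_count + 0.8 * age_group + 0.5 * gene_score
progressed = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([iel_count, age_group, gene_score])
model = LinearDiscriminantAnalysis().fit(X, progressed)

# Estimated probability of progression for a child with 12 IELs, aged 3-10 years,
# and an average genetic risk score -- under this toy model only.
print(model.predict_proba([[12.0, 1, 0.0]])[0, 1])
```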

The investigators noted that IEL count may be an uncommon diagnostic; however, they recommended the test, even if it necessitates referral. “The [IEL] count turned out to be crucial for the prediction power of the discriminant analysis,” the investigators wrote.

“The long-term risks of potential celiac disease have never been accurately evaluated. Thus, before adopting a wait-and-see strategy on a gluten-containing diet, a final decision should always be shared with the family.”

Still, the investigators concluded that gluten-free diet “should not be prescribed indistinctly to all patients” with potential celiac disease, as it is a “very heterogenic condition and is not necessarily the first step of overt disease.”

The investigators disclosed no funding or conflicts of interest.

SOURCE: Auricchio R et al. Gastroenterology. 2019 Apr 9. doi: 10.1053/j.gastro.2019.04.004.


While the simplification of the diagnostic process for celiac disease (CD), now heavily reliant on CD-specific autoantibodies, has made the life of clinicians easier in many respects, new scenarios also have emerged that are posing new challenges. One of them is that a substantial, growing portion of subjects (who may or may not have symptoms) present with positive CD autoantibodies but a normal duodenal mucosa (“potential celiac patient”). If left on gluten, with time some will develop villous atrophy, but some won’t. What is the clinician supposed to do with them?

Dr. Stefano Guandalini

 

The paper by Auricchio et al. addresses this issue in a rigorous, well-structured way by closely and prospectively monitoring a large series of pediatric patients. Their conclusions have very useful implications for the clinician. In fact, taking into consideration several criteria they found valuable after a long observation period – such as age of the child, HLA status, persistence of elevated CD-specific autoantibodies, and presence or absence of intraepithelial lymphocytes in the initial biopsy – they concluded that one can correctly identify at the outset four out of five potential celiac patients who will not develop villous atrophy, and thus do not need to follow a gluten-free diet.

Ultimately, however, let’s not forget that we are still dealing with percentages of risk to develop full-blown CD, not with definitive certainties. Hence, the decision of starting a gluten-free diet or not (and of how often and in which way to monitor those who remain on gluten) remains a mutually agreed upon plan sealed by two actors: on one side the patient (or the patient’s family); and on the other, an experienced health care provider who has clearly explained the facts. In other words, evidence-based criteria, good old medicine, and a grain of salt! 

Stefano Guandalini, MD, is a pediatric gastroenterologist at the University of Chicago Medical Center. He has no conflicts of interest.
 


Immune modulators help anti-TNF agents battle Crohn’s disease, but not UC

Timely findings on treatment optimization

Adding an immune modulator (IM) to anti–tumor necrosis factor (anti-TNF) initiation therapy benefits patients with Crohn’s disease (CD) but not those with ulcerative colitis (UC), according to a recent retrospective look at more than 1,000 cases.

The study showed that patients with CD who started combination therapy instead of monotherapy had lower rates of treatment ineffectiveness, experienced longer delays until hospitalization, and less often needed to switch their anti-TNF agent, reported lead author Laura E. Targownik, MD, of the University of Manitoba, in Winnipeg, Canada, and colleagues.

“Current guidelines on the medical management of IBD strongly support the use of IMs and anti-TNFs in combination over anti-TNF monotherapy,” the investigators wrote in Clinical Gastroenterology and Hepatology. “However, there is a sparsity of real-world data demonstrating the incremental benefits of combination therapy.”

The investigators noted that the SONIC trial, published in 2010, showed that patients treated with combination therapy were more likely to achieve corticosteroid-free remission at weeks 26 and 50; this finding became the basis of evidence leading multiple clinical guidelines to recommend combination therapy for patients with CD.

The present study involved 852 patients with CD and 303 with UC who began treatment with an anti-TNF agent during 2001-2016. Data were drawn from the Manitoba Inflammatory Bowel Disease (IBD) Epidemiology database.

The main outcome of interest was treatment ineffectiveness, which was defined by any of the following four events: acute, IBD-related hospital admission for more than 48 hours; resective intestinal surgery; corticosteroid use at least 14 days after initiating anti-TNF therapy, or, if corticosteroids were used within 16 weeks of anti-TNF initiation, then subsequent corticosteroid use occurring at least 16 weeks after initiation; or switching to a different anti-TNF agent. The investigators also looked for differences in effectiveness between two agents from each class: anti-TNF agents infliximab and adalimumab, and immunomodulators methotrexate and azathioprine.

Results showed that patients with CD had higher rates of ineffectiveness-free survival when treated with combination therapy instead of monotherapy at 1 year (74.2% vs. 68.6%) and 2 years (64.0% vs. 54.5%). Using a Cox proportional hazards model, this translated to a 38% reduced risk of treatment ineffectiveness (adjusted hazard ratio, 0.62).

“This suggests that the findings of the SONIC trial may extend to real-world clinical practice, even in patients who had previous IM exposure,” the investigators noted.

Combination therapy was also significantly associated with a longer time to first IBD-related hospitalization (HR, 0.53) and a longer time to switching anti-TNF agents (HR, 0.63). However, no such relationships were found for time to resective surgery or corticosteroid use. Although combination therapy had no impact on the rate of primary treatment ineffectiveness in multivariable logistic regression, patients who received anti-TNF therapy for more than 90 days had delayed secondary treatment ineffectiveness and fewer IBD-related hospitalizations. Choice of agent from either class had no influence on the effectiveness of combination therapy.
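
Those adjusted estimates come from a Cox proportional hazards model. The sketch below shows how such a model produces an adjusted hazard ratio for combination therapy, using simulated data and hypothetical variable names rather than the Manitoba cohort.

```python
# Illustrative sketch only: simulated data and hypothetical column names,
# not the Manitoba IBD Epidemiology database.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
combo = rng.integers(0, 2, n)                 # 1 = anti-TNF plus immunomodulator
age = rng.normal(40, 12, n)

# Simulate time to treatment ineffectiveness; combination therapy lowers the hazard.
hazard = 0.02 * np.exp(-0.48 * combo + 0.01 * (age - 40))
event_time = rng.exponential(1 / hazard)
censor_time = rng.uniform(6, 60, n)           # administrative censoring in months
observed = event_time <= censor_time

df = pd.DataFrame({
    "time_months": np.minimum(event_time, censor_time),
    "ineffective": observed.astype(int),
    "combo": combo,
    "age": age,
})

cph = CoxPHFitter().fit(df, duration_col="time_months", event_col="ineffective")
print(cph.summary.loc["combo", "exp(coef)"])  # adjusted hazard ratio for combination therapy
```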

In contrast with the above findings, combination therapy in patients with UC was less promising, which aligns with previous studies.

“[W]e were not able to demonstrate a significant advantage to combination therapy in persons with UC,” the investigators wrote. “In addition, all published cohort studies to date have not been able to confirm a significant benefit to combination therapy in UC. ... In light of the lower quality of prior evidence, combined with the results from our study, the indication for combination therapy in UC would appear to be weaker.”

“Further analyses in larger cohorts may clarify whether there is a clinically relevant benefit of combination therapy in persons with UC,” the investigators concluded. “Because of the discrepancy between our findings and those of a meta-analysis of cohort studies previously published on this topic, confirmation of our results is required in future studies.”

The investigators disclosed no funding or conflicts of interest.

SOURCE: Targownik LE et al. Clin Gastroenterol Hepatol. 2018 Nov 15. doi: 10.1016/j.cgh.2018.11.003.


Twenty years after the approval of the first anti–tumor necrosis factor (TNF) biologic agent for the treatment of inflammatory bowel disease (IBD), patients and providers are still learning how to optimize these medications. One optimization is the use of combination therapy (immunomodulator and anti-TNF). Immunomodulators are used independently for maintenance of remission of IBD, and they have been shown to reduce immunogenicity and improve efficacy when used in combination with an anti-TNF agent in prior short-term randomized controlled trials. However, use of combination therapy in the real-world is not universally practiced. Data are lacking on the risks and benefits of long-term use of these agents. Therefore, this article by Targownik et al. is very timely.

Dr. Millie Long
Patients with Crohn’s disease treated with combination therapy in this population-based cohort had improved outcomes, including a significant decrease in treatment ineffectiveness, an increased time to first hospitalization, and an increased time to anti-TNF medication switch.

Importantly, a mixed group of patients who had previously been on azathioprine monotherapy and those newly starting this therapy at the time of anti-TNF initiation were included in this cohort (a group similar to what we see in real-world practice). Data on risk factors for disease complications, such as disease phenotype or severity, were not available. By contrast, none of the efficacy associations were improved in the smaller group of patients with ulcerative colitis on combination therapy.

As providers counsel patients on the benefits and risks of various IBD treatment choices, these data by Targownik et al. will inform decisions. Future research should incorporate additional means of biologic optimization, such as the use of therapeutic drug monitoring and/or risk factor–based selection of therapeutic agents, to better inform individualized treatment choices.

Millie D. Long MD, MPH, is an associate professor of medicine in the division of gastroenterology and hepatology; Inflammatory Bowel Diseases Center; vice chief for education; director, Gastroenterology and Hepatology Fellowship Program at the University of North Carolina at Chapel Hill. She has the following conflicts of interest: AbbVie, Takeda, Pfizer, UCB, Janssen, Salix, Prometheus, Target Pharmasolutions, and Valeant. 
 

Adding an immune modulator (IM) to anti–tumor necrosis factor (anti-TNF) initiation therapy benefits patients with Crohn’s disease (CD) but not those with ulcerative colitis (UC), according to a recent retrospective look at more than 1,000 cases.

The study showed that patients with CD who started combination therapy instead of monotherapy had lower rates of treatment ineffectiveness, experienced longer delays until hospitalization, and less often needed to switch their anti-TNF agent, reported lead author Laura E. Targownik, MD, of the University of Manitoba, in Winnipeg, Canada, and colleagues.

“Current guidelines on the medical management of IBD strongly support the use of IMs and anti-TNFs in combination over anti-TNF monotherapy,” the investigators wrote in Clinical Gastroenterology and Hepatology. “However, there is a sparsity of real-world data demonstrating the incremental benefits of combination therapy.”

The investigators noted that the SONIC trial, published in 2010, showed that patients treated with combination therapy were more likely to achieve corticosteroid-free remission at weeks 26 and 50; this evidence led multiple clinical guidelines to recommend combination therapy for patients with CD.

The present study involved 852 patients with CD and 303 with UC who began treatment with an anti-TNF agent during 2001-2016. Data were drawn from the Manitoba Inflammatory Bowel Disease (IBD) Epidemiology database.

The main outcome of interest was treatment ineffectiveness, which was defined by any of the following four events: acute, IBD-related hospital admission for more than 48 hours; resective intestinal surgery; corticosteroid use at least 14 days after initiating anti-TNF therapy, or, if corticosteroids were used within 16 weeks of anti-TNF initiation, then subsequent corticosteroid use occurring at least 16 weeks after initiation; or switching to a different anti-TNF agent. The investigators also looked for differences in effectiveness between two agents from each class: anti-TNF agents infliximab and adalimumab, and immunomodulators methotrexate and azathioprine.
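
The corticosteroid criterion is the only part of that composite with conditional timing, so a small sketch may help make the definition concrete. The following Python function is purely illustrative: the field names, date handling, and example values are assumptions of this write-up, not the study's actual analysis code.

```python
from datetime import date, timedelta
from typing import Optional

def treatment_ineffective(
    anti_tnf_start: date,
    ibd_admission_over_48h: bool,        # acute IBD-related admission lasting >48 hours
    resective_surgery: bool,             # resective intestinal surgery
    switched_anti_tnf: bool,             # switch to a different anti-TNF agent
    first_steroid_use: Optional[date] = None,   # first corticosteroid course after anti-TNF start
    steroids_within_16_weeks: bool = False,     # corticosteroids used within 16 weeks of initiation
    later_steroid_use: Optional[date] = None,   # subsequent corticosteroid course, if any
) -> bool:
    """Flag the composite 'treatment ineffectiveness' outcome described above (illustrative only)."""
    if steroids_within_16_weeks:
        # Steroids during the induction window only count if a later course
        # starts at least 16 weeks after anti-TNF initiation.
        steroid_failure = (later_steroid_use is not None
                           and later_steroid_use >= anti_tnf_start + timedelta(weeks=16))
    else:
        # Otherwise, any corticosteroid use at least 14 days after initiation counts.
        steroid_failure = (first_steroid_use is not None
                           and first_steroid_use >= anti_tnf_start + timedelta(days=14))

    return any([ibd_admission_over_48h, resective_surgery, steroid_failure, switched_anti_tnf])

# Example: steroids used during induction, then restarted about 20 weeks later -> ineffective
print(treatment_ineffective(
    anti_tnf_start=date(2015, 1, 1),
    ibd_admission_over_48h=False,
    resective_surgery=False,
    switched_anti_tnf=False,
    steroids_within_16_weeks=True,
    later_steroid_use=date(2015, 5, 21),
))  # True
```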

Results showed that patients with CD had higher rates of ineffectiveness-free survival when treated with combination therapy instead of monotherapy at 1 year (74.2% vs. 68.6%) and 2 years (64.0% vs. 54.5%). Using a Cox proportional hazards model, this translated to a 38% reduced risk of treatment ineffectiveness (adjusted hazard ratio, 0.62).
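
As a quick arithmetic check on those figures, the short Python sketch below converts the reported adjusted hazard ratio into a relative risk reduction and compares it with the crude cumulative-hazard ratio implied by the 2-year survival estimates; the gap between the two is expected, because the published ratio is covariate adjusted. The calculation is illustrative only and uses the numbers quoted above.

```python
import math

adjusted_hr = 0.62                           # adjusted hazard ratio reported above
surv_combo_2y, surv_mono_2y = 0.640, 0.545   # 2-year ineffectiveness-free survival

# An HR of 0.62 corresponds to a 38% relative reduction in the hazard of treatment ineffectiveness
print(f"Relative hazard reduction: {1 - adjusted_hr:.0%}")

# Crude cumulative-hazard ratio from the 2-year estimates, using H(t) = -ln S(t);
# it sits closer to 1 than the adjusted HR because it ignores covariate adjustment.
crude_ratio = math.log(surv_combo_2y) / math.log(surv_mono_2y)
print(f"Crude 2-year cumulative-hazard ratio: {crude_ratio:.2f}")  # about 0.74
```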

“This suggests that the findings of the SONIC trial may extend to real-world clinical practice, even in patients who had previous IM exposure,” the investigators noted.

Combination therapy was also significantly associated with longer time to first IBD-related hospitalization (HR, 0.53) and longer time to switching anti-TNF agents (HR, 0.63). However, no such relationships were found for time to resective surgery or corticosteroid use. Although combination therapy had no impact on the rate of primary treatment ineffectiveness in multivariable logistic regression, patients who received anti-TNF therapy for more than 90 days had delayed secondary treatment ineffectiveness and fewer IBD-related hospitalizations. Choice of agent from either class had no influence on the effectiveness of combination therapy.

In contrast with the above findings, combination therapy in patients with UC was less promising, which aligns with previous studies.

“[W]e were not able to demonstrate a significant advantage to combination therapy in persons with UC,” the investigators wrote. “In addition, all published cohort studies to date have not been able to confirm a significant benefit to combination therapy in UC. ... In light of the lower quality of prior evidence, combined with the results from our study, the indication for combination therapy in UC would appear to be weaker.”

“Further analyses in larger cohorts may clarify whether there is a clinically relevant benefit of combination therapy in persons with UC,” the investigators concluded. “Because of the discrepancy between our findings and those of a meta-analysis of cohort studies previously published on this topic, confirmation of our results is required in future studies.”

The investigators disclosed no funding or conflicts of interest.

SOURCE: Targownik LE et al. Clin Gastroenterol Hepatol. 2018 Nov 15. doi: 10.1016/j.cgh.2018.11.003.

 


Endoscopist personality linked to adenoma detection rate

Article Type
Changed

Endoscopists who described themselves as “compulsive” and “thorough” had significantly higher rates of adenoma detection, according to results from a self-reported survey of 117 physician endoscopists.

Financial incentives, malpractice concerns, and perceptions of adenoma detection rate as a quality metric were not associated with endoscopists’ detection rates in the survey.

“Adenoma detection rates were higher among physicians who described themselves as more compulsive or thorough, and among those who reported feeling rushed or having difficulty accomplishing goals,” Ghideon Ezaz, MD, of Beth Israel Deaconess Medical Center in Boston and associates wrote in Clinical Gastroenterology and Hepatology.

These feelings were related to withdrawal times rather than daily procedure volume. “We hypothesize that performing a meticulous examination is mentally taxing and can cause a physician to feel rushed or perceive that it is difficult to keep pace or accomplish goals,” the researchers wrote.

Adenoma detection rates vary widely among physicians – up to threefold in some studies. Researchers have failed to attribute most of this discrepancy to seemingly obvious factors such as the type of specialty training an endoscopist completes. The traditional fee-for-service payment model is likely a culprit since physicians are paid for performing as many colonoscopies as possible rather than for procedural quality. Other potential variables include personality traits and endoscopists’ knowledge and views on the importance of adenoma detection rates.

To examine the roles of these factors in adenoma detection rates, Dr. Ezaz and coinvestigators used electronic health records data from four health systems in Boston, Pittsburgh, North Carolina, and Seattle. Detection rates were adjusted to control for differences among patient populations. Next, the researchers surveyed the physicians who performed the endoscopies about their financial motivations, knowledge and perceptions of colonoscopy quality, and personality traits.

Among 117 physicians surveyed, the median risk-adjusted adenoma detection rate was 29.3%, with an interquartile range of 24.1%-35.5%. “We found no significant association between adenoma detection rate and financial incentives, malpractice concerns, or physicians’ perceptions of adenoma detection rate as a quality metric,” the researchers wrote.
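
For context on how such a summary is typically tabulated, here is a minimal Python sketch that computes a per-endoscopist adenoma detection rate from procedure-level records and reports its median and interquartile range. The column names and toy values are assumptions for illustration, and the study's risk adjustment for patient mix is not reproduced.

```python
import numpy as np
import pandas as pd

# Toy procedure-level records: one row per screening colonoscopy (illustrative values only)
procedures = pd.DataFrame({
    "endoscopist": ["A", "A", "A", "B", "B", "C", "C", "C", "C"],
    "adenoma_detected": [1, 0, 1, 0, 0, 1, 1, 0, 1],  # 1 if at least one adenoma found
})

# Crude ADR per endoscopist = colonoscopies with at least one adenoma / total colonoscopies
adr = procedures.groupby("endoscopist")["adenoma_detected"].mean()

q1, median, q3 = np.percentile(adr, [25, 50, 75])
print(adr.round(3).to_dict())
print(f"Median ADR {median:.1%} (IQR {q1:.1%}-{q3:.1%})")
```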

In contrast, endoscopists who described themselves as either much or somewhat more compulsive than their peers had significantly higher median adjusted rates of adenoma detection than did endoscopists who described themselves as about the same or somewhat less compulsive than others. These adenoma detection rates, in respective order, were 33.1%, 32.9%, 26.4%, and 27.3% (P = .0019). Adenoma detection rates also were significantly higher among physicians who described themselves as more thorough than their peers, who said they felt rushed during endoscopy, and who reported having difficulty pacing themselves, accomplishing goals, or managing unforeseen situations.

A secondary analysis revealed the same links between personality traits and adenomas per colonoscopy. The findings support an expert’s prior assertion (Gastrointest Endosc. 2007 Jan;65[1]:145-50) that the best endoscopists are “slow, careful, and compulsive,” the researchers noted. They recommended nurturing “meticulousness and attention to detail” during training and evaluating trainees based on these characteristics.

The National Cancer Institute provided funding. The researchers reported having no conflicts of interest.
 

SOURCE: Ezaz G et al. Clin Gastroenterol Hepatol. 2018 Oct 13. doi: 10.1016/j.cgh.2018.10.019.

Vitals

 

Key clinical point: Endoscopists’ self-reported personality traits correlated significantly with their rates of adenoma detection.

Major finding: Self-reported compulsiveness, thoroughness, feeling rushed during endoscopy, and having difficulty pacing oneself, meeting goals, or managing unforeseen situations all correlated with significantly higher rates of adenoma detection, while financial incentives, malpractice concerns, and physicians’ perception of the value of adenoma detection did not.

Study details: Surveys of 117 physician endoscopists and analyses of electronic health record data from the four geographically diverse health systems where they worked.

Disclosures: The National Cancer Institute provided funding. The researchers reported having no conflicts of interest.

Source: Ezaz G et al. Clin Gastroenterol Hepatol. 2018 Oct 13. doi: 10.1016/j.cgh.2018.10.019.


AGA introduces pathway to navigate IBD care

Article Type
Changed

 

Inflammatory bowel disease (IBD) treatment remains a challenge in part because care is often fragmented among providers in different specialties, according to the American Gastroenterological Association. To address the need for provider coordination, the AGA has issued a new referral pathway for IBD care, published in Gastroenterology.

“The goal of this pathway is to offer guidance to primary care, emergency department, and gastroenterology providers, by helping identify patients at risk of or diagnosed with IBD and provide direction on initiating appropriate patient referrals,” wrote lead author Jami Kinnucan, MD, of the University of Michigan, Ann Arbor, and members of the AGA workgroup.

In particular, the pathway focuses on gaps in IBD care related to inflammatory issues, mental health, and nutrition. The work group included not only gastroenterologists, but also a primary care physician, mental/behavioral health specialist, registered dietitian/nutritionist, critical care specialist, nurse practitioner, physician group representative, and a patient advocacy representative.

The pathway identifies the top three areas where IBD patients usually present with symptoms: the emergency department, primary care office, and gastroenterology office.

The work group developed a list of key characteristics associated with increased morbidity, established IBD, or IBD-related complications that can be separated into high-risk, moderate-risk, and low-risk groups to help clinicians determine the timing of and need for referrals.

The pathway uses a sample patient presenting with GI symptoms such as bloody diarrhea; GI bleeding; anemia; fecal urgency; fever; abdominal pain; weight loss; and pain, swelling, or redness in the joints. Clinicians then apply the key characteristics to triage the patients into the risk groups.

High-risk characteristics include a history of perianal or severe rectal disease or deep ulcers in the GI mucosa; two or more emergency department visits for GI problems within the past 6 months; severe anemia; inadequate response to outpatient IBD therapy; history of IBD-related surgery; and malnourishment.

Moderate-risk characteristics include anemia without clinical symptoms, chronic corticosteroid use, and no emergency department or other GI medical visits within the past year.

Low-risk characteristics include chronic narcotic use, one or more comorbidities (such as heart failure, active hepatitis B, oncologic malignancy, lupus, GI infections, primary sclerosing cholangitis, viral hepatitis, and celiac disease), one or more relevant mental health conditions (such as depression, anxiety, or chronic pain), and nonadherence to IBD medical therapies.

“Referrals should be based on the highest level of risk present, in the event that a patient has characteristics that fall in more than one risk category,” the work group wrote.
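
Because the pathway's rule is simply that the highest tier wins, the triage step can be expressed as a short lookup. The Python sketch below is a rough illustration based on the characteristics summarized in this article; it abbreviates the lists and is not the AGA's official checklist or scoring tool.

```python
# Abbreviated, illustrative characteristic sets drawn from the summary above
HIGH_RISK = {
    "perianal or severe rectal disease", "deep GI ulcers",
    ">=2 ED visits for GI problems in 6 months", "severe anemia",
    "inadequate response to outpatient IBD therapy",
    "prior IBD-related surgery", "malnourishment",
}
MODERATE_RISK = {
    "asymptomatic anemia", "chronic corticosteroid use",
    "no ED or GI visits in past year",
}
LOW_RISK = {
    "chronic narcotic use", "comorbidity",
    "mental health condition", "nonadherence to IBD therapy",
}

def triage(characteristics: set) -> str:
    """Return the highest applicable risk tier, per the pathway's 'highest level of risk' rule."""
    if characteristics & HIGH_RISK:
        return "high"
    if characteristics & MODERATE_RISK:
        return "moderate"
    if characteristics & LOW_RISK:
        return "low"
    return "no pathway characteristics identified"

# A patient with one moderate-risk and one high-risk characteristic is referred as high risk
print(triage({"chronic corticosteroid use", "severe anemia"}))  # high
```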

To further guide clinicians in referring patients with possible or diagnosed IBD to gastroenterology specialists and to mental health and nutrition specialists, the work group developed an IBD Characteristics Assessment Checklist and a Referral Feedback form to accompany the pathway.

The checklist is designed for use by any health care professional to help identify whether a patient needs to be referred based on the key characteristics; the feedback form gives gastroenterologists a template to communicate with referring physicians about comanagement strategies for the patient.

The pathway also includes more details on how clinicians can tackle barriers to mental health and nutrition care for IBD patients.

“Until further evaluations are conducted, the work group encourages the immediate use of the pathway to begin addressing the needed improvements for IBD care coordination and communication between the different IBD providers,” the authors wrote.

Dr. Kinnucan disclosed serving as a consultant for AbbVie, Janssen, and Pfizer and serving on the Patient Education Committee of the Crohn’s and Colitis Foundation.

SOURCE: Kinnucan J et al. Gastroenterology. 2019. doi: 10.1053/j.gastro.2019.03.064.


Tofacitinib upped herpes zoster risk in ulcerative colitis

Article Type
Changed

 

Among patients with moderate to severe ulcerative colitis, a median of 1.4 years and up to 4.4 years of tofacitinib therapy was safe apart from a dose-related increase in risk of herpes zoster infection, according to an integrated analysis of data from five clinical trials.


Compared with placebo, a 5-mg twice-daily maintenance dose of tofacitinib (Xeljanz) produced a 2.1-fold greater risk of herpes zoster infection (95% confidence interval, 0.4-6.0), while a 10-mg, twice-daily dose produced a statistically significant 6.6-fold increase in incidence (95% CI, 3.2-12.2).

With the exception of the higher incidence rate of herpes zoster, “in the overall cohort, the safety profile of tofacitinib was generally similar to that of tumor necrosis factor inhibitor therapies,” wrote William J. Sandborn, MD, director of the inflammatory bowel disease center and professor of medicine at the University of California, San Diego, and associates. The findings were published in Clinical Gastroenterology and Hepatology.

Tofacitinib is an oral, small-molecule Janus kinase inhibitor approved in the United States for treating moderate to severe ulcerative colitis, as well as rheumatoid and psoriatic arthritis. The recommended ulcerative colitis dose is 10 mg twice daily for at least 8 weeks (induction therapy) followed by 5 or 10 mg twice daily (maintenance). The safety of tofacitinib has been studied in patients with rheumatoid arthritis through 9 years of treatment. To begin a similar undertaking in ulcerative colitis, Dr. Sandborn and associates pooled data from three 8-week, double-blind, placebo-controlled induction trials, as well as one 52-week, double-blind, placebo-controlled maintenance trial and one ongoing open-label trial. All patients received twice-daily tofacitinib (5 mg or 10 mg) or placebo.

Among 1,157 tofacitinib recipients in the pooled analysis, 84% received an average dose of 10 mg twice daily. For every 100 person-years of tofacitinib exposure, there were an estimated 2.0 serious infections, 1.3 opportunistic infections, 4.1 herpes zoster infections, 1.4 malignancies (including nonmelanoma skin cancer, which had an incidence of 0.7), 0.2 major adverse cardiovascular events, and 0.2 gastrointestinal perforations. The likelihood of these events did not increase with time on tofacitinib, the researchers said.
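
For readers unfamiliar with exposure-adjusted event rates, the sketch below shows how an incidence rate per 100 person-years and an exact Poisson confidence interval can be computed in Python. The worked example back-calculates from the four deaths and the roughly 1,612.8 patient-years of exposure reported for this program; the code itself is illustrative and not part of the published analysis.

```python
from scipy.stats import chi2

def incidence_rate_per_100py(events: int, person_years: float, alpha: float = 0.05):
    """Point estimate and exact (Garwood) Poisson confidence interval, per 100 person-years."""
    rate = events / person_years * 100
    lower = (chi2.ppf(alpha / 2, 2 * events) / 2 / person_years * 100) if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2 / person_years * 100
    return rate, lower, upper

# Four deaths over ~1,612.8 patient-years of tofacitinib exposure
rate, lo, hi = incidence_rate_per_100py(4, 1612.8)
print(f"{rate:.1f} per 100 person-years (95% CI {lo:.2f}-{hi:.2f})")  # ~0.2 (0.07-0.64)
```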

Worsening ulcerative colitis was the most common serious adverse event for patients who received both induction and maintenance therapy. For patients on maintenance therapy, only herpes zoster infection occurred more often with tofacitinib than with placebo, a difference that reached statistical significance at the 10-mg dose. These safety findings resemble those in rheumatoid arthritis trials of tofacitinib, and apart from herpes zoster, they also resemble safety data for vedolizumab (an integrin receptor antagonist) and anti–tumor necrosis factor agents in ulcerative colitis, the researchers wrote.

There were four deaths during the entire tofacitinib ulcerative colitis program, for an incidence rate of 0.2 per 100 person-years of exposure. All occurred in patients receiving 10 mg twice daily. Causes of death were dissecting aortic aneurysm, hepatic angiosarcoma, acute myeloid leukemia, and pulmonary embolism in a patient with cholangiocarcinoma that had metastasized to the peritoneum. Recently, concerns about pulmonary embolism have led the European Medicines Agency (EMA) to recommend against the 10-mg twice-daily tofacitinib dose in patients at increased risk for pulmonary embolism.

“Compared with prior experience with tofacitinib in rheumatoid arthritis, no new or unexpected safety signals were identified,” the researchers concluded. “These safety findings support the long-term use of tofacitinib 5 and 10 mg twice daily in patients with moderately to severely active” ulcerative colitis.

Pfizer makes tofacitinib, funded the individual trials, and paid for medical writing. Dr. Sandborn disclosed grants, personal fees, and nonfinancial support from Pfizer and many other pharmaceutical companies.

SOURCE: Sandborn WJ et al. Clin Gastroenterol Hepatol. 2018 Nov 23. doi: 10.1016/j.cgh.2018.11.035.

How safe is tofacitinib?

As new mechanisms of action become available for ulcerative colitis (UC) drugs, clinicians must weigh the risks versus benefits (i.e., safety vs. efficacy). In this article, Sandborn and colleagues provide additional information on the safety profile of tofacitinib. They report an increased risk of herpes zoster that was dose dependent (a sixfold increase on 10 mg twice daily). The overall safety profile was reassuring, similar to that seen in the rheumatoid arthritis population treated with tofacitinib, and in line with the safety profile of anti-TNF antibodies (excluding the increased risk of zoster). With a nonlive zoster vaccine now available, some have advocated vaccinating all patients being started on tofacitinib. However, there is a theoretical risk of disease exacerbation, and ongoing studies will hopefully answer this important question.

Dr. David A. Schwartz
Another emerging safety concern with tofacitinib involves venous thromboembolism (VTE). The Food and Drug Administration recently issued a warning based on the findings of a safety trial in rheumatoid arthritis that found an increased risk of PE and death in patients on the 10-mg twice-daily dose. The exact details of the risk have yet to be released. Enrollment in the trial required patients to be older than 50 years and to have at least one cardiovascular risk factor. The European regulatory body (EMA) recently forbade the use of the 10-mg dose of tofacitinib for anyone at increased risk for VTE. It is unclear whether this risk applies to those younger than 50 years without cardiovascular risk factors or to the UC population. In the current study of UC patients, major adverse cardiovascular events were rare (n = 4; IR, 0.2). In the short term, it may be prudent to restrict the 10-mg twice-daily dose to those who do not fall into the high-risk category, or to reduce the dose to 5 mg twice daily if possible.

David A. Schwartz, MD, professor of medicine, division of gastroenterology, hepatology and nutrition, Inflammatory Bowel Disease Center, Vanderbilt University, Nashville.


Vitals

 

Key clinical point: Tofacitinib therapy shows a dose-related increase in risk of herpes zoster in patients with ulcerative colitis.

Major finding: Compared with placebo, a 5-mg twice-daily maintenance dose of tofacitinib produced a 2.1-fold greater risk of herpes zoster infection (95% CI, 0.4-6.0), while a 10-mg twice-daily dose produced a statistically significant 6.6-fold increase in incidence (95% CI, 3.2–12.2).

Study details: Integrated safety analysis of five clinical trials (four randomized, double-blinded, and placebo-controlled) with 1,612.8 total years of exposure (median treatment duration, 1.4 years).

Disclosures: Pfizer makes tofacitinib, funded the individual trials, and paid for medical writing. Dr. Sandborn disclosed grants, personal fees, and nonfinancial support from Pfizer and many other pharmaceutical companies.

Source: Sandborn WJ et al. Clin Gastroenterol Hepatol. 2018 Nov 23. doi: 10.1016/j.cgh.2018.11.035.
