Study eyed natural history of branch-duct intraductal papillary mucinous neoplasms
Branch-duct intraductal papillary mucinous neoplasms (BD-IPMNs) grew at a median annual rate of 0.8 mm in a retrospective study of 1,369 patients.
While most of these cysts were “indolent and dormant,” some grew rapidly and developed “other worrisome features,” Youngmin Han, MS, of Seoul (South Korea) National University reported with his associates in the February issue of Gastroenterology. Therefore, clinicians should plan follow-up surveillance based on initial cyst size and growth rate, they concluded.
Based on their findings, the researchers recommended surgery for young, fit, asymptomatic patients who have BD-IPMNs with a diameter of at least 30 mm or with thickened cyst walls, or who have a main pancreatic duct measuring 5-9 mm. Surgery also should be considered when patients have lymphadenopathy, high tumor marker levels, or an abrupt change in pancreatic duct caliber with distal pancreatic atrophy or a rapidly growing cyst, they said.
For asymptomatic patients whose cysts are under 10 mm and who do not have worrisome features, they recommended follow-up with CT or MRI at 6 months and then every 2 years after that. Cysts of 10-20 mm should be imaged at 6 months, at 12 months, and then every 1.5-2 years after that, they said. Patients with cyst diameters greater than 20 mm “should undergo MRI or CT or EUS [endoscopic ultrasound] every 6 months for 1 year and then annually thereafter, until the cyst size and features become stable,” they added. Patients whose cysts have a diameter of 30 mm or greater “should be closely monitored with MRI or CT or EUS every 6 months. Surgical resection can be considered in younger patients or those with other combined worrisome features.”
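For illustration only, the size-based follow-up schedule above can be summarized as a simple lookup. The thresholds below come from the recommendations described in this article, but the function and its structure are a hypothetical sketch, not a tool proposed by the investigators, and they omit the clinical judgment (symptoms, fitness for surgery, worrisome features) that drives real-world decisions.

```python
def suggested_surveillance(cyst_mm: float) -> str:
    """Hypothetical sketch of the size-based follow-up intervals described above."""
    if cyst_mm >= 30:
        return ("MRI/CT/EUS every 6 months; consider resection in younger patients "
                "or those with other combined worrisome features")
    if cyst_mm > 20:
        return ("MRI/CT/EUS every 6 months for 1 year, then annually until cyst "
                "size and features become stable")
    if cyst_mm >= 10:
        return "imaging at 6 and 12 months, then every 1.5-2 years"
    return "CT or MRI at 6 months, then every 2 years"  # <10 mm, no worrisome features


print(suggested_surveillance(12))  # "imaging at 6 and 12 months, then every 1.5-2 years"
```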
To characterize the natural history of BD-IPMN, the investigators evaluated clinical and imaging data collected between 2001 and 2016 from patients with classical features of BD-IPMN. Each patient included in the study provided 3 or more years of CT, MRI, EUS, and endoscopic retrograde cholangiopancreatography data. The researchers used regression models to estimate changes in sizes of cysts and main pancreatic ducts.
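The report does not detail the regression approach; as a rough, hypothetical illustration of how an annual growth rate might be estimated from serial imaging, the sketch below fits an ordinary least-squares slope to one patient's cyst diameters over time (the measurements are invented, and NumPy is assumed).

```python
import numpy as np

# Invented serial measurements for a single patient: years since baseline
# and cyst diameter in millimeters.
years = np.array([0.0, 1.0, 2.1, 3.0, 4.2, 5.1])
diameter_mm = np.array([12.0, 12.6, 13.1, 13.9, 14.8, 15.5])

# Ordinary least-squares slope = estimated growth rate in mm per year.
slope, intercept = np.polyfit(years, diameter_mm, deg=1)
print(f"Estimated growth rate: {slope:.2f} mm/year")  # roughly 0.7 mm/year here
```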
Median follow-up time was 61 months (range, 36-189 months). Cyst diameter averaged 12.8 mm (standard deviation, 6.5 mm) at baseline and 17 mm (SD, 9.2 mm) at final measurement. Larger baseline diameter was associated with faster growth (P = .046): Cysts measuring less than 10 mm at baseline grew at a median annual rate of 0.8 mm (SD, 1.1 mm), while those measuring at least 30 mm grew at a median annual rate of 1.2 mm (SD, 2.1 mm).
Worrisome features were present in 59 patients at baseline and emerged in another 150 patients during follow-up. At baseline, only 2.3% of cysts exceeded 30 mm in diameter, but 8.0% did at final measurement. Cyst wall thickening was found in 0.5% of patients at baseline and 3.7% of patients at final measurement. Main pancreatic ducts measured 5-9 mm in 1.9% of patients at baseline and in 5.6% of patients at final measurement. Additionally, the prevalence of mural nodules rose from 0.4% at baseline to 3.1% at final measurement.
Main pancreatic ducts averaged 1.8 mm (SD, 1.0 mm) at baseline and 2.4 mm (SD, 1.8 mm) at final measurement. Compared with smaller cysts, larger baseline cyst diameter correlated significantly with larger main pancreatic duct diameter and with more cases of cyst wall thickening and mural nodules (P less than .001 for all comparisons).
The study was funded by a grant from the Korean Health Technology R&D Project of the Ministry of Health and Welfare, Republic of Korea. The investigators reported having no conflicts of interest.
SOURCE: Han Y et al. Gastroenterology. 2018. doi: 10.1053/j.gastro.2017.10.013.
The appropriate management of branch-duct intraductal papillary mucinous neoplasms (BD-IPMNs), a precursor cystic lesion to pancreatic cancer, has been a controversial issue since their initial description in 1982. Current national and international guidelines are primarily based on surgical series with potential selection bias and on observational studies with short surveillance periods. Consequently, there is limited information on the natural history and, more importantly, the malignant potential of BD-IPMNs.
The study by Youngmin Han and colleagues represents a comprehensive analysis of over 1,000 patients, each with at least 3 years of follow-up for a suspected BD-IPMN. In addition, the authors identified an optimal screening method for patients based on cyst size. Their data largely validates prior reports and will undoubtedly serve as the basis for future pancreatic cyst guidelines.
However, as the authors note, limitations of their study include its retrospective design and the lack of validation of their screening protocol. Moreover, several lingering questions remain for patients with BD-IPMNs: What is the best method of measuring a BD-IPMN (for example, CT, MRI, or endoscopic ultrasound)? How long should surveillance continue? And what is the role for cytopathology and ancillary studies, such as carcinoembryonic antigen testing, molecular testing, and testing for other pancreatic cyst biomarkers? At the risk of invoking a cliché, “further studies are needed” to identify an optimal treatment algorithm and, considering the increasingly frequent detection of pancreatic cysts, a cost-effective approach to the evaluation of patients with BD-IPMNs.
Aatur D. Singhi, MD, PhD, is in the division of anatomic pathology in the department of pathology at the University of Pittsburgh Medical Center. He has no conflicts of interest.
FROM GASTROENTEROLOGY
Key clinical point: Tailor the surveillance of BD-IPMNs based on initial diameter and the presence or absence of high-risk features.
Major finding: Median annual growth rate was 0.8 mm.
Data source: A retrospective study of 1,369 patients with BD-IPMNs.
Disclosures: The study was funded by a grant from the Korean Health Technology R&D Project of the Ministry of Health and Welfare, Republic of Korea. The investigators reported having no conflicts of interest.
Source: Han Y et al. Gastroenterology. 2018. doi: 10.1053/j.gastro.2017.10.013.
One in five Crohn’s disease patients have major complications after infliximab withdrawal
About one in five patients with Crohn’s disease who stopped infliximab while in stable remission went on to develop major complications over roughly 7 years of follow-up, according to research published in the February issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2017.09.061). About 70% of patients remained free of both infliximab restart failure and major complications, said Catherine Reenaers, MD, PhD, of Centre Hospitalier Universitaire de Liège (Belgium), and her associates. Significant predictors of major complications included upper gastrointestinal disease at the time of infliximab withdrawal, white blood cell count of at least 5.0 × 10⁹ per L, and hemoglobin level under 12.5 g per dL. “Patients with at least two of these factors had a more than 40% risk of major complication in the 7 years following infliximab withdrawal,” the researchers reported.
Little is known about long-term outcomes after patients with Crohn’s disease withdraw from infliximab. Therefore, Dr. Reenaers and her associates retrospectively studied 102 patients with Crohn’s disease who had received infliximab and an antimetabolite (azathioprine, mercaptopurine, or methotrexate) for at least 12 months, had been in steroid-free clinical remission for at least 6 months, and then withdrew from infliximab. Patients were recruited from 19 centers in Belgium and France and were originally part of a prospective cohort study of infliximab withdrawal in Crohn’s disease (Gastroenterology. 2012;142[1]:63-70.e5).
About half of patients relapsed and restarted infliximab within 12 months, which is in line with other studies, the researchers noted. Over a median follow-up of 83 months (interquartile range, 71-93 months), 21% (95% confidence interval, 13.1%-30.3%) of patients had no complications, did not restart infliximab, and started no other biologics. In all, 70.2% of patients (95% CI, 60.2%-80.1%) had no major complications and did not fail to respond after restarting infliximab.
Eighteen patients (19%; 95% CI, 10%-27%) developed major complications: 14 who required surgery and 4 who developed new complex perianal lesions. In a multivariable model, the strongest independent predictor of major complications was leukocytosis (hazard ratio, 10.5; 95% CI, 1.3-83; P less than .002), followed by upper gastrointestinal disease (HR, 5.8; 95% CI, 1.5-22) and low hemoglobin level (HR, 4.1; 95% CI, 1.5-21.8; P less than .01). The 13 patients who lacked these risk factors had no major complications of infliximab withdrawal. Among 72 patients who had at least one risk factor, 16.3% (95% CI, 7%-25%) developed major complications over 7 years. Strikingly, among 17 with at least two risk factors, 43% (95% CI, 17%-69%) developed major complications over 7 years, the researchers noted.
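To make the risk stratification concrete, the hypothetical snippet below simply counts the three reported risk factors; the cutoffs come from the article, but the code is an illustration, not the authors' multivariable model.

```python
def count_risk_factors(upper_gi_disease: bool,
                       wbc_x10e9_per_l: float,
                       hemoglobin_g_per_dl: float) -> int:
    """Count the three risk factors reported in the study (illustration only)."""
    return sum([
        upper_gi_disease,              # upper GI disease at infliximab withdrawal
        wbc_x10e9_per_l >= 5.0,        # white blood cell count >= 5.0 x 10^9/L
        hemoglobin_g_per_dl < 12.5,    # hemoglobin < 12.5 g/dL
    ])


# In the cohort, patients with none of these factors had no major complications,
# while those with at least two had a roughly 43% rate over 7 years.
print(count_risk_factors(False, 6.2, 11.8))  # 2
```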
Complications emerged a median of 50 months (interquartile range, 41-73 months) after patients received their last infliximab infusion, highlighting the need for close long-term monitoring even if patients show no signs of early clinical relapse after infliximab withdrawal, the investigators said. “One strength of this cohort was the homogeneity of the population,” they stressed. “Most studies of anti–tumor necrosis factor withdrawal after clinical remission were limited by heterogeneous populations, variable lengths of infliximab treatment before discontinuation, and variable use of immunomodulators and corticosteroids. In [our] cohort, the population was homogenous, infliximab withdrawal was standardized, and the disease characteristics at the time of stopping were collected prospectively.” Although follow-up times varied, less than 5% of patients were followed for less than 3 years, they noted.
The researchers did not acknowledge external funding sources. Dr. Reenaers disclosed ties to AbbVie, Takeda, MSD, Mundipharma, Hospira, and Ferring.
SOURCE: Reenaers C et al. Clin Gastroenterol Hepatol. 2018 February (in press).
The option of stopping a biologic agent is an attractive prospect for most Crohn's disease (CD) patients in stable clinical remission. The STORI trial, published in 2012, was among the first of the few studies to address withdrawal of biologic therapy in CD patients who had been in sustained clinical remission on combination therapy (infliximab plus a thiopurine or methotrexate) for at least 6 months. In that trial, almost 50% of patients relapsed within a year of stopping infliximab.
Reenaers et al. recently published long-term follow-up of the original STORI cohort. After a median follow-up of 7 years, four out of five patients previously in clinical remission on combination therapy experienced worsening disease activity after withdrawal of infliximab. While the majority (70%) were able to resume infliximab and recapture disease response without untoward adverse effects, one in five patients experienced a major disease-related complication, such as complex perianal disease or the need for abdominal surgery. Upper GI tract involvement, a high white blood cell count, and a low hemoglobin concentration were associated with an increased likelihood of a major complication. Notably, the median time to a major complication was almost 4 years.
These results are similar to long-term relapse rates reported in other studies of therapy withdrawal in CD. While biomarkers such as C-reactive protein and fecal calprotectin, along with endoscopic disease activity, are reliable predictors of short-term relapse, clinical factors such as family history of CD, disease extent, stricturing or penetrating disease, and cigarette smoking are more relevant predictors of long-term disease activity. Both types of predictors should be weighed when considering withdrawal of therapy in CD.
Lastly, while the majority of patients who relapse after withdrawal of a biologic agent will do so within a year or two, a subset may not experience disease-related complications for several years, underscoring the need for long-term follow-up.
Manreet Kaur, MD, is assistant professor in the division of gastroenterology and hepatology; medical director, Inflammatory Bowel Disease Center, and medical director, faculty group practice, Baylor College of Medicine, Houston.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: Over 7 years, about one in five patients with remitted Crohn’s disease developed a major complication after withdrawing from infliximab, despite remaining on an antimetabolite.
Major finding: Eighteen patients (19%; 95% CI, 10%-27%) developed major complications: Fourteen needed surgery and four developed new complex perianal lesions.
Data source: A cohort study of 102 patients with Crohn’s disease who had received infliximab and an antimetabolite for at least 12 months, had been in steroid-free clinical remission for at least 6 months, and who then withdrew from infliximab.
Disclosures: The researchers did not acknowledge external funding sources. Dr. Reenaers disclosed ties to AbbVie, Takeda, MSD, Mundipharma, Hospira, and Ferring.
Source: Reenaers C et al. Clin Gastroenterol Hepatol. 2018 February (in press).
Eradicating HCV significantly improved liver stiffness in meta-analysis
Eradicating chronic hepatitis C virus (HCV) infection led to significant decreases in liver stiffness in a systematic review and meta-analysis of nearly 3,000 patients.
Mean liver stiffness fell by 4.1 kilopascals (kPa; 95% confidence interval, 3.3-4.9 kPa) 12 or more months after patients achieved a sustained virologic response (SVR) to treatment, but did not significantly change in patients who did not achieve SVR, reported Siddharth Singh, MD, of the University of California, San Diego, in La Jolla, and his associates in the January issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2017.04.038). The results were especially striking among patients who received direct-acting antiviral agents (DAAs) or who had high baseline levels of inflammation, the investigators added.
Based on these findings, about 47% of patients with advanced fibrosis or cirrhosis at baseline will drop below 9.5 kPa after achieving SVR, they reported. “With this decline in liver stiffness, it is conceivable that risk of liver-related complications would decrease, particularly in patients without cirrhosis,” they added. “Future research is warranted on the impact of magnitude and kinetics of decline in liver stiffness on improvement in liver-related outcomes.”
Eradicating HCV infection was known to decrease liver stiffness, but the magnitude of decline was not well understood. Therefore, the reviewers searched the literature through October 2016 for studies of HCV-infected adults who underwent liver stiffness measurement by vibration-controlled transient elastography before and at least once after completing HCV treatment. All studies also included data on median liver stiffness among patients who did and did not achieve SVR. The search identified 23 observational studies and one post hoc analysis of a randomized controlled trial, for a total of 2,934 patients, of whom 2,214 achieved SVR.
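For readers unfamiliar with how such pooled estimates are produced, the sketch below shows a generic fixed-effect, inverse-variance meta-analysis of study-level mean changes; the input numbers are invented, and the authors' actual (likely random-effects) methods are not reproduced here.

```python
import math

# Invented study-level results: (mean change in liver stiffness, kPa; standard error).
studies = [(-3.8, 0.9), (-4.6, 1.2), (-3.5, 0.7), (-5.0, 1.5)]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * mean for (mean, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled mean change.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled change: {pooled:.2f} kPa (95% CI, {low:.2f} to {high:.2f})")
```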
Among patients who achieved SVR, mean liver stiffness dropped by 2.4 kPa at the end of treatment (95% CI, 1.7-3.0 kPa), by 3.1 kPa 1-6 months later (95% CI, 1.6-4.7 kPa), and by 3.2 kPa 6-12 months after completing treatment (95% CI, 2.6-3.9 kPa). A year or more after finishing treatment, patients who achieved SVR had a 28% median decrease in liver stiffness (interquartile range, 22%-35%). However, liver stiffness did not significantly change among patients who did not achieve SVR, the reviewers reported.
Mean liver stiffness declined significantly more among patients who received DAAs (4.5 kPa) than among recipients of interferon-based regimens (2.6 kPa; P = .03). However, studies of DAAs included patients with greater liver stiffness at baseline, which could at least partially explain this discrepancy, the investigators said. Baseline cirrhosis also was associated with a greater decline in liver stiffness (mean, 5.1 kPa, vs. 2.8 kPa in patients without cirrhosis; P = .02), as was high baseline alanine aminotransferase level (P less than .01). Among patients whose baseline liver stiffness measurement exceeded 9.5 kPa, 47% had their liver stiffness drop to less than 9.5 kPa after achieving SVR.
Coinfection with HIV did not significantly alter the magnitude of decline in liver stiffness 6-12 months after treatment in patients who achieved SVR, the reviewers noted. “[Follow-up] assessment after SVR was relatively short; hence, long-term evolution of liver stiffness after antiviral therapy and impact of decline in liver stiffness on patient clinical outcomes could not be ascertained,” they wrote. The studies also did not consistently assess potential confounders such as nonalcoholic fatty liver disease, diabetes, and alcohol consumption.
One reviewer disclosed funding from the National Institutes of Health/National Library of Medicine. None had conflicts of interest.
The current era of new-generation direct-acting antiviral agents has revolutionized the treatment landscape of chronic hepatitis C virus infection, providing short-duration, safe, and consistently effective regimens that achieve SVR or cure in nearly 100% of patients. While achieving SVR is important, even more important is the long-term impact of SVR and whether cure translates into outcomes such as improved mortality or a reduced risk of disease progression. Although improved mortality after SVR has been demonstrated, one of the main drivers of risk of disease progression is the severity of hepatic fibrosis.
Robert J. Wong, MD, MS, is with the department of medicine and is director of research and education, division of gastroenterology and hepatology, Alameda Health System – Highland Hospital, Oakland, Calif. He has received a 2017-2019 Clinical Translational Research Award from AASLD, has received research funding from Gilead and AbbVie, and is on the speakers bureau of Gilead, Salix, and Bayer. He has also done consulting for and been an advisory board member for Gilead.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: Eradicating chronic hepatitis C virus infection led to significant decreases in liver stiffness.
Major finding: Mean liver stiffness decreased by 4.1 kPa 12 or more months after patients achieved sustained virologic response to treatment, but did not significantly improve in patients who lacked SVR.
Data source: A systematic review and meta-analysis of 2,934 patients from 23 observational studies and one post hoc analysis of a randomized controlled trial.
Disclosures: One reviewer disclosed funding from the National Institutes of Health/National Library of Medicine. The reviewers reported having no conflicts of interest.
VIDEO: Project ECHO would cost-effectively expand HCV treatment
Training community health providers to treat chronic hepatitis C virus infection is a cost-effective way to expand treatment access and reduce the national burden of this prevalent condition, according to research published in the December issue of Gastroenterology (doi: 10.1053/j.gastro.2017.10.016).
The model, dubbed Project ECHO, “is the best way, to our knowledge, to cost-effectively find and treat HCV patients at scale,” wrote Thilo Rattay, MPH, of the University of Michigan School of Public Health, Ann Arbor, and his associates. “Our analysis demonstrates that fundamentally changing the care delivery model for HCV enables unparalleled reach, in contrast to simply using ever more cost-effective drugs in an inefficient system. Project ECHO can quickly reduce the burden of disease from HCV and accelerate the impact of the new generation of highly effective medications.”
Project ECHO (echo.unm.edu) links multidisciplinary teams of specialists (hubs) to physicians and nurse practitioners in community practice (spokes). Each hub, which is usually based at an academic medical center, holds video conferences to mentor and teach providers about best practices for managing conditions ranging from autism to Zika virus infection. Initial reports suggest that Project ECHO can improve health care quality and access as well as job satisfaction among primary care providers, the researchers noted.
Project ECHO has 127 hubs globally, including 77 in the United States, and receives support from foundations, state legislatures, and government agencies. Because patients with chronic HCV vastly outnumber gastroenterologists in the United States, Mr. Rattay and his coinvestigators used Markov models to evaluate Project ECHO’s cost-effectiveness in the HCV setting. To do so, they created a decision tree and Markov models with Microsoft Excel, PrecisionTree, and @RISK by using data from the U.S. Census Bureau, MarketScan, and an extensive literature review.
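The authors built their decision tree and Markov models in Excel with PrecisionTree and @RISK; the minimal Python sketch below conveys only the general idea of a Markov cohort model and an incremental cost-effectiveness ratio (ICER), using invented transition probabilities, costs, and utilities rather than the study's inputs.

```python
import numpy as np

# Toy 3-state Markov cohort model: chronic HCV -> cured -> dead (absorbing).
# All transition probabilities, costs, and utilities are invented placeholders.
def run_cohort(p_treat_per_year: float, years: int = 20, discount: float = 0.03):
    T = np.array([
        [0.97 - p_treat_per_year, p_treat_per_year, 0.03],  # from chronic
        [0.00,                    0.99,             0.01],  # from cured
        [0.00,                    0.00,             1.00],  # from dead
    ])
    state_cost = np.array([8000.0, 500.0, 0.0])   # annual cost per state ($)
    utility = np.array([0.75, 0.90, 0.0])         # QALY weight per state
    treat_cost = 25000.0                          # one-time cost per patient treated ($)
    dist = np.array([1.0, 0.0, 0.0])              # whole cohort starts chronic
    total_cost = total_qaly = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * (dist @ state_cost + dist[0] * p_treat_per_year * treat_cost)
        total_qaly += d * (dist @ utility)
        dist = dist @ T
    return total_cost, total_qaly


# Compare a higher treatment rate ("ECHO-like") with a status quo rate.
c_new, q_new = run_cohort(p_treat_per_year=0.20)
c_old, q_old = run_cohort(p_treat_per_year=0.05)
icer = (c_new - c_old) / (q_new - q_old)
# A negative ICER means the strategy is both cheaper and more effective ("dominant").
print(f"ICER: ${icer:,.0f} per QALY gained")
```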
The models yielded an incremental cost-effectiveness ratio of $10,351 per quality-adjusted life year when compared with the status quo, said the researchers. Commonly cited willingness-to-pay thresholds are $50,000 and $100,000, indicating that Project ECHO is a cost-effective way to expand HCV treatment, they added. However, insurers would pay substantially more during the first 5 years of rollout – about $708 million versus $368 million with the status quo. During the first year, ECHO would cost payers about $350.5 million more than would the status quo, but 4,446 more patients would be treated, drastically reducing prevalence in the insurance pool. Consequently, subsequent costs would drop by nearly $11 million over the first 5 years of ECHO. Nonetheless, the “high budgetary costs suggest that incremental rollout of [Project] ECHO may be best,” the investigators wrote.
They were unable to determine whether increased treatment under ECHO relates to expanded screening, treatment adherence, or access, but sensitivity analyses suggested that “results are largely independent of the cause,” the researchers wrote. “Importantly, most of the financial benefits of treating HCV are not immediate, while a majority of the costs are upfront,” they stressed. Stakeholders therefore need to adopt a long-term view and consider population-based health care models and reimbursement strategies that “capture the full benefit of this type of ecosystem.”
The investigators had no external funding sources and no conflicts of interest.
Training community health providers to treat chronic hepatitis C virus infection is a cost-effective way to expand treatment access and reduce the national burden of this prevalent condition, according to research published in the December issue of Gastroenterology (doi: 10.1053/j.gastro.2017.10.016).
The model, dubbed Project ECHO, “is the best way, to our knowledge, to cost-effectively find and treat HCV patients at scale,” wrote Thilo Rattay, MPH, of the University of Michigan School of Public Health, Ann Arbor, and his associates. “Our analysis demonstrates that fundamentally changing the care delivery model for HCV enables unparalleled reach, in contrast to simply using ever more cost-effective drugs in an inefficient system. Project ECHO can quickly reduce the burden of disease from HCV and accelerate the impact of the new generation of highly effective medications.”
Project ECHO (echo.unm.edu) links multidisciplinary teams of specialists (hubs) to physicians and nurse practitioners in community practice (spokes). Each hub, which is usually based at an academic medical center, holds video conferences to mentor and teach providers about best practices for managing conditions ranging from autism to Zika virus infection. Initial reports suggest that Project ECHO can improve health care quality and access as well as job satisfaction among primary care providers, the researchers noted.
FROM GASTROENTEROLOGY
Key clinical point: A teletraining model called Project ECHO is a cost-effective way to expand access to treatment for chronic hepatitis C virus infection.
Major finding: The incremental cost-effectiveness ratio was $10,351 per quality-adjusted life year, compared with the status quo. Commonly cited willingness-to-pay thresholds are $50,000 and $100,000.
Data source: A decision tree and Markov models created with Microsoft Excel, PrecisionTree, and @RISK using data from the U.S. Census Bureau, MarketScan, and an extensive literature review.
Disclosures: The investigators had no external funding sources and no conflicts of interest.
Biologics during pregnancy did not affect infant vaccine response
The use of biologic therapy during pregnancy did not lower antibody titers among infants vaccinated against Haemophilus influenzae B (HiB) or tetanus toxoid, according to the results of a study of 179 mothers reported in the January issue of Clinical Gastroenterology and Hepatology (2017. doi: 10.1016/j.cgh.2017.08.041).
Additionally, there was no link between median infliximab concentration in umbilical cord blood and antibody titers among infants aged 7 months and older, wrote Dawn B. Beaulieu, MD, with her associates. “In a limited cohort of exposed infants given the rotavirus vaccine, there was no association with significant adverse reactions,” they also reported.
Experts now recommend against live vaccinations for infants who may have detectable concentrations of biologics, but it has been unclear whether these infants can mount adequate responses to inactivated vaccines. Therefore, the researchers analyzed data from the Pregnancy in IBD and Neonatal Outcomes (PIANO) registry collected between 2007 and 2016 and surveyed women about their infants’ vaccination history. They also quantified antibodies in serum samples from infants aged 7 months and older and measured concentrations of biologics in cord blood.
Among 179 mothers with IBD, most had inactive (77%) or mild disease activity (18%) during pregnancy, the researchers said. Eleven (6%) mothers were not on immunosuppressives while pregnant, 15 (8%) were on an immunomodulator, and the rest were on biologic monotherapy (65%) or a biologic plus an immunomodulator (21%). A total of 46 infants had available HiB titer data, of whom 38 were potentially exposed to biologics; among 49 infants with available tetanus titers, 41 were potentially exposed. In all, 71% of exposed infants had protective levels of antibodies against HiB, and 80% had protective titers to tetanus toxoid. Proportions among unexposed infants were 50% and 75%, respectively. Proportions of protective antibody titers did not significantly differ between groups even after excluding infants whose mothers received certolizumab pegol, which has negligible rates of placental transfer.
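As a rough sense-check of comparisons like this one, the sketch below runs Fisher’s exact test on 2x2 tables of protective versus nonprotective titers in exposed and unexposed infants. The cell counts are approximated from the percentages and denominators reported above (the exact counts and the study’s own statistical methods are not given here), so treat the output as illustrative only.

```python
# Illustrative comparison of protective-titer proportions between
# biologic-exposed and unexposed infants. Counts are approximated from the
# reported percentages; this is a sketch, not a reproduction of the analysis.
from scipy.stats import fisher_exact

comparisons = {
    # antigen: ((protected_exposed, total_exposed), (protected_unexposed, total_unexposed))
    "HiB":     ((27, 38), (4, 8)),   # ~71% vs. 50%
    "tetanus": ((33, 41), (6, 8)),   # ~80% vs. 75%
}

for antigen, ((a, n_exp), (c, n_unexp)) in comparisons.items():
    # Rows: exposed / unexposed; columns: protected / not protected.
    table = [[a, n_exp - a], [c, n_unexp - c]]
    odds_ratio, p_value = fisher_exact(table)
    print(f"{antigen}: OR = {odds_ratio:.2f}, Fisher exact P = {p_value:.2f}")
```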
A total of 39 infants received live rotavirus vaccine despite having detectable levels of biologics in cord blood at birth. Seven developed mild vaccine reactions consisting of fever (six infants) or diarrhea (one infant). This proportion (18%) resembles that from a large study (N Engl J Med. 2006;354:23-33) of healthy infants who were vaccinated against rotavirus, the researchers noted. “Despite our data suggesting a lack of severe side effects with the rotavirus vaccine in these infants, in the absence of robust evidence, one should continue to avoid live vaccines in infants born to mothers on biologic therapy (excluding certolizumab) during the first year of life or until drug clearance is confirmed,” they suggested. “With the growing availability of tests, one conceivably could test serum drug concentration in infants, and, if undetectable, consider live vaccination at that time, if appropriate for the vaccine, particularly in infants most likely to benefit from such vaccines.”
The Crohn’s and Colitis Foundation provided funding. Dr. Beaulieu disclosed a consulting relationship with AbbVie, and four coinvestigators also reported ties to pharmaceutical companies.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: In utero biologic exposure did not prevent immune response to Haemophilus influenzae B and tetanus vaccines during infancy.
Major finding: Proportions of protective antibody titers did not significantly differ among groups.
Data source: A prospective study of 179 mothers with IBD and their infants.
Disclosures: The Crohn’s and Colitis Foundation provided funding. Dr. Beaulieu disclosed a consulting relationship with AbbVie, and four coinvestigators also reported ties to pharmaceutical companies.
VIDEO: Study supports close follow-up of patients with high-risk adenomas plus serrated polyps
The simultaneous colonoscopic presence of serrated polyps and high-risk adenomas was associated with a fivefold increase in the odds of metachronous high-risk adenomas in a large population-based registry study reported in Gastroenterology (2017. doi: 10.1053/j.gastro.2017.09.011).
The data “support the recommendation that individuals with large and high-risk serrated lesions require closer surveillance,” said Joseph C. Anderson, MD, of the White River Junction Department of Veterans Affairs Medical Center, Vt., with his associates. Without accounting for size and histology, the presence of serrated polyps alone was not associated with an increased risk of metachronous high-risk adenoma, they also reported. Although serrated polyps are important precursors of colorectal cancer, relevant longitudinal surveillance data are sparse. Therefore, the investigators studied 5,433 adults who underwent index and follow-up colonoscopies a median of 4.9 years apart and were tracked in the population-based New Hampshire Colonoscopy Registry. The cohort had a median age of 61 years, and half of the individuals were male.
After adjusting for age, sex, smoking status, body mass index, and median interval between colonoscopies, individuals were at significantly increased risk of metachronous high-risk adenoma if their baseline colonoscopy showed high-risk adenoma and synchronous serrated polyps (odds ratio, 5.6; 95% confidence interval, 1.7-18.3), high-risk adenoma with synchronous sessile serrated adenomas (or polyps) or traditional serrated adenomas (OR, 16.0; 95% CI, 7.0-37.0), or high-risk adenoma alone (OR, 3.9; 95% CI, 2.8-5.4), vs. participants with no findings.
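Adjusted odds ratios of this kind are typically estimated with multivariable logistic regression. The sketch below shows the general pattern in Python with statsmodels on a fully synthetic data frame; the variable names mirror the covariates listed above, but none of this reproduces the registry’s actual data or modeling choices.

```python
# Sketch of how adjusted odds ratios are typically obtained: a multivariable
# logistic regression of metachronous high-risk adenoma on the index-colonoscopy
# findings plus covariates. The data frame is entirely synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "metachronous_hra": rng.integers(0, 2, n),   # outcome (synthetic)
    "index_hra":        rng.integers(0, 2, n),   # high-risk adenoma at baseline
    "index_sp":         rng.integers(0, 2, n),   # synchronous serrated polyp at baseline
    "age":              rng.normal(61, 8, n),
    "male":             rng.integers(0, 2, n),
    "smoker":           rng.integers(0, 2, n),
    "bmi":              rng.normal(27, 4, n),
    "interval_years":   rng.normal(4.9, 1.5, n),
})

model = smf.logit(
    "metachronous_hra ~ index_hra * index_sp + age + male + smoker + bmi + interval_years",
    data=df,
).fit(disp=False)

# exp(coefficient) is the adjusted odds ratio for that term; exp(CI) is its 95% CI.
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int()).set_axis(["2.5%", "97.5%"], axis=1)
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```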
The researchers also found that the index presence of large (at least 1-cm) serrated polyps greatly increased the likelihood of finding large metachronous serrated polyps on subsequent colonoscopy (OR, 14.0; 95% CI, 5.0-40.9). “This has clinical relevance, since previous studies have demonstrated an increased risk for colorectal cancer in individuals with large serrated polyps,” the researchers wrote. “However, this increased risk may occur over a protracted time period of 10 years or more, and addressing variation in serrated polyp detection rates and completeness of resection may be more effective than a shorter surveillance interval at reducing risk in these individuals.”
The index presence of sessile serrated adenomas or polyps, or traditional serrated adenomas, also predicted the subsequent development of large serrated polyps (OR, 9.7; 95% CI, 3.6-25.9). The study did not examine polyp location or morphology (flat versus polypoid), but the association might be related to right-sided or flat lesions, which colonoscopists are more likely to miss or to incompletely excise than more defined polypoid lesions, the researchers commented. “Additional research is needed to further clarify the associations between index patient characteristics, polyp location, size, endoscopic appearance and histology, and the metachronous risk of advanced lesions and colorectal cancer in order to refine current surveillance recommendations for individuals undergoing colonoscopy,” they commented.
The study spanned January 2004 to June 2015, and awareness about the importance of serrated polyps rose during this period, they also noted.
The National Cancer Institute and the Norris Cotton Cancer Center provided funding. The researchers reported having no conflicts of interest.
FROM GASTROENTEROLOGY
Key clinical point: High-risk adenomas and the synchronous presence of serrated polyps significantly increased the risk of metachronous high-risk adenomas.
Major finding: Compared with individuals with unremarkable colonoscopies, the odds ratio was 5.6 after adjusting for age, sex, smoking status, body mass index, and median interval between colonoscopies.
Data source: Analyses of index and follow-up colonoscopies of 5,433 individuals from a population-based surveillance registry.
Disclosures: The National Cancer Institute and the Norris Cotton Cancer Center provided funding. The researchers reported having no conflicts of interest.
Adjusting fecal immunochemical test thresholds improved their performance
Physicians can minimize the heterogeneity of fecal immunochemical colorectal cancer screening tests by adjusting thresholds for positivity, according to researchers. The report is in the January issue of Gastroenterology (doi: 10.1053/j.gastro.2017.09.018).
“Rather than simply using thresholds recommended by the manufacturer, screening programs should choose thresholds based on intended levels of specificity and manageable positivity rates,” wrote PhD student Anton Gies of the German Cancer Research Center and the National Center for Tumor Diseases in Heidelberg, Germany, with his associates.
The investigators directly compared nine different fecal immunochemical assays using stool samples from 516 individuals, of whom 216 had advanced adenoma or colorectal cancer. Using thresholds recommended by manufacturers (2-17 mcg Hb/g feces) produced widely ranging sensitivities (22%-46%) and specificities (86%-98%). Using a uniform threshold of 15 mcg Hb/g feces narrowed the range of specificity (94%-98%), but sensitivities remained quite variable (16%-34%). Adjusting detection thresholds to obtain preset specificities (99%, 97%, or 93%) greatly narrowed both sensitivity (14%-18%, 21%-24%, and 30%-35%, respectively) and rates of positivity (2.8%-3.4%, 5.8%-6.1%, and 10%-11%, respectively), the researchers reported.
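The threshold-adjustment idea is simple to express in code: for each assay, set the cutoff at the fecal hemoglobin quantile of participants without advanced neoplasia that corresponds to the target specificity, then read off sensitivity and the overall positivity rate. The sketch below does this on simulated concentrations, since the study’s individual-level measurements are not reproduced here.

```python
# Sketch of threshold adjustment for a quantitative FIT assay: choose the cutoff
# from the hemoglobin distribution of participants WITHOUT advanced neoplasia so
# that a target specificity is met, then compute sensitivity and positivity.
# The concentrations below are simulated, not the study's measurements.
import numpy as np

rng = np.random.default_rng(1)
hb_controls = rng.lognormal(mean=0.0, sigma=1.0, size=300)   # fecal Hb, mcg Hb/g feces (simulated)
hb_cases    = rng.lognormal(mean=1.0, sigma=1.2, size=216)   # advanced adenoma or CRC (simulated)

for target_specificity in (0.99, 0.97, 0.93):
    # The cutoff is the control-group quantile corresponding to the target specificity.
    cutoff = np.quantile(hb_controls, target_specificity)
    sensitivity = np.mean(hb_cases >= cutoff)
    positivity = np.mean(np.concatenate([hb_controls, hb_cases]) >= cutoff)
    print(f"specificity {target_specificity:.0%}: cutoff {cutoff:5.1f} mcg/g, "
          f"sensitivity {sensitivity:.0%}, positivity {positivity:.0%}")
```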
Increasingly, physicians are using fecal immunochemical testing to screen for colorectal neoplasia. In a prior study (Ann Intern Med. 2009 Feb 3;150[3]:162-90) investigators evaluated the diagnostic performance of six qualitative point-of-care fecal immunochemical tests among screening colonoscopy patients in Germany, and found that the tests had highly variable sensitivities and specificities for the detection of colorectal neoplasia. Not surprisingly, the most sensitive tests were the least specific, and vice versa, which is the problem with using fixed thresholds in qualitative fecal immunochemical tests, the researchers asserted.
Quantitative fecal immunochemical tests are more flexible than qualitative assays because users can adjust thresholds based on fecal hemoglobin concentrations. However, very few studies had directly compared sensitivities and specificities among quantitative fecal immunochemical tests, and “it is unclear to what extent differences ... reflect true heterogeneity in test performance or differences in study populations or varying pre-analytical conditions,” the investigators wrote. Patients in their study underwent colonoscopies in Germany between 2005 and 2010, and fecal samples were stored at –80 °C until analysis. The researchers calculated test sensitivities and specificities by using colonoscopy and histology reports evaluated by blinded, trained research assistants.
“Apparent heterogeneity in diagnostic performance of quantitative fecal immunochemical tests can be overcome to a large extent by adjusting thresholds to yield defined levels of specificity or positivity rates,” the investigators concluded. Only 16 patients in this study had colorectal cancer, which made it difficult to pinpoint test sensitivity for this finding, they noted. However, they found similar sensitivity estimates for colorectal cancer in an ancillary clinical study.
Manufacturers provided test kits free of charge. There were no external funding sources, and the researchers reported having no conflicts of interest.
The fecal immunochemical test (FIT) is an important option for colorectal cancer screening, endorsed by guidelines and effective for mass screening using mailed outreach. Patients offered FIT or a choice between FIT and colonoscopy are more likely to be screened.
In the United States, FIT is a qualitative test (reported as positive or negative), based on Food and Drug Administration regulations, in an attempt to simplify clinical decision making. In Europe, FIT has been used quantitatively, with adjustable positivity rate and sensitivity pegged to available colonoscopy resources. Adding complexity, there are multiple FIT brands, each with varying performance, even at similar hemoglobin concentrations. Each brand has a different sensitivity, specificity, and positivity rate, because reagents, buffers, and collection devices vary. Ambient temperature during mailing and transport time to processing labs can also affect test performance.
Theodore R. Levin, MD, is chief of gastroenterology, Kaiser Permanente Medical Center, Walnut Creek, Calif. He has no conflicts of interest.
FROM GASTROENTEROLOGY
Key clinical point: To minimize the heterogeneity of fecal immunochemical screening tests, adjust thresholds to produce a predetermined specificity or a manageable rate of positivity.
Major finding: Adjusting detection thresholds to obtain preset specificities (99%, 97%, or 93%) greatly narrowed both sensitivity (14%-18%, 21%-24%, and 30%-35%, respectively) and rates of positivity (2.8%-3.4%, 5.8%-6.1%, and 10%-11%, respectively).
Data source: A comparison of nine different fecal immunochemical assays in 516 patients, of whom 216 had colorectal neoplasias.
Disclosures: Manufacturers provided test kits free of charge. There were no other external sources of support, and the researchers reported having no conflicts of interest.
AGA Clinical Practice Update: Best practices for POEM in achalasia
Peroral endoscopic myotomy, or POEM, should be considered as primary therapy for type III achalasia and as a treatment option comparable with laparoscopic Heller myotomy for any of the achalasia syndromes – but only when physicians with expertise are available, according to a clinical practice update from the American Gastroenterological Association.
Further, post-POEM patients should be considered at high risk of developing reflux esophagitis and should be advised of the management considerations, including potential indefinite proton pump inhibitor therapy and/or surveillance endoscopy, prior to undergoing the procedure, Peter J. Kahrilas, MD, of Northwestern University, Chicago, and his colleagues wrote in the update, which is published in the November issue of Gastroenterology (2017. doi: 10.1053/j.gastro.2017.10.001).
In an effort to describe the place for POEM among the currently available robust treatments for achalasia, the authors conducted a literature review – their “best practice” recommendations are based on the findings from relevant publications and on expert opinion.
Additionally, they said POEM should be performed by experienced physicians in high-volume centers since the procedure is complex and an estimated 20-30 procedures are needed to achieve competence.
The update and these proposed best practices follow the evolution of POEM over the last decade: it began as an exciting concept and is now a mainstream treatment option for achalasia, the authors said.
“Uncontrolled outcome data have been very promising comparing POEM with the standard surgical treatment for achalasia, laparoscopic Heller myotomy (LHM). However, concerns remain regarding post-POEM reflux, the durability of the procedure, and the learning curve for endoscopists adopting the technique,” they wrote. These concerns, coupled with recent randomized controlled trial data showing excellent and equivalent 5-year outcomes with pneumatic dilation and LHM, make the role of POEM somewhat controversial.
As part of the review, they considered the strengths and weaknesses of both POEM and LHM. The data comparing POEM with LHM or pneumatic dilation remain very limited, but based on those that do exist, the authors concluded that “POEM appears to be a safe, effective, and minimally invasive management option in achalasia in the short term.”
Long-term durability data are not yet available, they noted.
Dr. Kahrilas received funding from the U.S. Public Health Service.
FROM GASTROENTEROLOGY
Low tryptophan levels linked to IBD
Patients with inflammatory bowel disease (IBD) had significantly lower serum levels of the essential amino acid tryptophan than healthy controls in a large study reported in the December issue of Gastroenterology (doi: 10.1053/j.gastro.2017.08.028).
Serum tryptophan levels also correlated inversely with both disease activity and C-reactive protein levels in patients with IBD, reported Susanna Nikolaus, MD, of University Hospital Schleswig-Holstein, Kiel, Germany, with her associates. “Tryptophan deficiency could contribute to development of IBD. Studies are needed to determine whether modification of intestinal tryptophan pathways affects [its] severity,” they wrote.
Several small case series have reported low levels of tryptophan in IBD and other autoimmune disorders, the investigators noted. Removing tryptophan from the diet has been found to increase susceptibility to colitis in mice, and supplementing with tryptophan or some of its metabolites has the opposite effect. For this study, the researchers used high-performance liquid chromatography to quantify tryptophan levels in serum samples from 535 consecutive patients with IBD and 100 matched controls. They used mass spectrometry to measure metabolites of tryptophan, enzyme-linked immunosorbent assay to measure interleukin-22 (IL-22) levels, and 16S rDNA amplicon sequencing to correlate tryptophan levels with fecal microbiota species. Finally, they used real-time polymerase chain reaction to measure levels of mRNAs encoding tryptophan-metabolizing enzymes and transporters in colonic biopsy specimens.
Serum tryptophan levels were significantly lower in patients with IBD than controls (P = 5.3 x 10–6). The difference was starker in patients with Crohn’s disease (P = 1.1 x 10–10 vs. controls) compared with those with ulcerative colitis (P = 2.8 x 10–3 vs. controls), the investigators noted. Serum tryptophan levels also correlated inversely with disease activity in patients with Crohn’s disease (P = .01), while patients with ulcerative colitis showed a similar but nonsignificant trend (P = .07). Low tryptophan levels were associated with marked, statistically significant increases in C-reactive protein levels in both Crohn’s disease and ulcerative colitis. Tryptophan level also correlated inversely with leukocyte count, although the trend was less pronounced (P = .04).
IBD was associated with several aberrations in the tryptophan kynurenine pathway, which is the primary means of catabolizing the amino acid. For example, compared with controls, patients with active IBD had significantly lower levels of mRNA encoding tryptophan 2,3-dioxygenase-2 (TDO2, a key enzyme in the kynurenine pathway) and solute carrier family 6 member 19 (SLC6A19, also called B0AT1, a neutral amino acid transporter). Patients with IBD also had significantly higher levels of indoleamine 2,3-dioxygenase 1 (IDO1), which catalyzes the initial, rate-limiting oxidation of tryptophan to kynurenine. Accordingly, patients with IBD had a significantly higher ratio of kynurenine to tryptophan than did controls, and this abnormality was associated with disease activity, especially in Crohn’s disease (P = .03).
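To make the kynurenine-to-tryptophan ratio concrete, the sketch below computes it per sample and compares the two groups with a Mann-Whitney U test, one common nonparametric choice. The serum concentrations are simulated, and neither the values nor the test necessarily matches what the authors actually did.

```python
# Sketch of the kynurenine-to-tryptophan ratio comparison described above,
# using simulated serum concentrations (the authors' data are not reproduced).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
# Simulated serum concentrations for IBD patients (n = 535) and controls (n = 100).
trp_ibd, kyn_ibd = rng.normal(45, 10, 535), rng.normal(2.5, 0.6, 535)
trp_ctl, kyn_ctl = rng.normal(60, 10, 100), rng.normal(2.2, 0.5, 100)

# A higher ratio means more tryptophan is being shunted into the kynurenine pathway.
ratio_ibd = kyn_ibd / trp_ibd
ratio_ctl = kyn_ctl / trp_ctl

stat, p = mannwhitneyu(ratio_ibd, ratio_ctl, alternative="two-sided")
print(f"median Kyn/Trp: IBD {np.median(ratio_ibd):.3f} vs. controls {np.median(ratio_ctl):.3f} (P = {p:.2g})")
```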
Patients with IBD who had relatively higher tryptophan levels also tended to have more diverse gut microbiota than did patients with lower serum tryptophan levels, although differences among groups were not statistically significant, the investigators said. Serum concentration of IL-22 also correlated with disease activity in patients with IBD, and infliximab responders had a “significant and sustained increase” of tryptophan levels over time, compared with nonresponders.
Potsdam dietary questionnaires found no link between disease activity and dietary consumption of tryptophan, the researchers said. Additionally, they found no links between serum tryptophan levels and age, smoking status, or disease complications, such as fistulae or abscess formation.
The investigators acknowledged grant support from the DFG Excellence Cluster “Inflammation at Interfaces,” BMBF e-med SYSINFLAME, and H2020 SysCID. One coinvestigator reported employment by CONARIS Research Institute AG, which helps develop drugs with inflammatory indications. The other investigators had no conflicts of interest.
In this interesting study, Nikolaus et al. found that serum tryptophan levels were decreased in patients with inflammatory bowel disease (IBD), compared with control subjects. The authors also found that serum tryptophan levels correlated inversely with C-reactive protein in both ulcerative colitis and Crohn's disease and with active disease, as defined by clinical disease activity scores, in Crohn's disease. A validated food-frequency questionnaire found no difference in tryptophan consumption based on disease activity in a subset of patients, which decreases the likelihood that the association is driven by altered dietary intake alone and suggests that other mechanisms may be involved.
Sara Horst, MD, MPH, is an assistant professor, division of gastroenterology, hepatology & nutrition, Inflammatory Bowel Disease Center, Vanderbilt University Medical Center, Nashville, Tenn. She had no relevant conflicts of interest.
In this interesting study, Nikolaus et al. found an association of decreased serum tryptophan in patients with inflammatory bowel disease (IBD), compared with control subjects. The authors also found an inverse correlation of serum tryptophan levels in patients with C-reactive protein in both ulcerative colitis and Crohn's disease and with active disease as defined by clinical disease activity scores in Crohn's disease. A validated food-frequency questionnaire found no difference in tryptophan consumption based on disease activity in a subset of patients, decreasing the likelihood that this association is secondary to altered dietary intake only and may be related to other mechanisms.
Sara Horst, MD, MPH, is an assistant professor, division of gastroenterology, hepatology & nutrition, Inflammatory Bowel Disease Center, Vanderbilt University Medical Center, Nashville, Tenn. She had no relevant conflicts of interest.
In this interesting study, Nikolaus et al. found an association of decreased serum tryptophan in patients with inflammatory bowel disease (IBD), compared with control subjects. The authors also found an inverse correlation of serum tryptophan levels in patients with C-reactive protein in both ulcerative colitis and Crohn's disease and with active disease as defined by clinical disease activity scores in Crohn's disease. A validated food-frequency questionnaire found no difference in tryptophan consumption based on disease activity in a subset of patients, decreasing the likelihood that this association is secondary to altered dietary intake only and may be related to other mechanisms.
Sara Horst, MD, MPH, is an assistant professor, division of gastroenterology, hepatology & nutrition, Inflammatory Bowel Disease Center, Vanderbilt University Medical Center, Nashville, Tenn. She had no relevant conflicts of interest.
FROM GASTROENTEROLOGY
Key clinical point: Patients with inflammatory bowel disease had significantly lower serum tryptophan levels than healthy controls.
Major finding: Serum tryptophan levels were significantly lower in patients with IBD than in controls (P = 5.3 x 10–6) and correlated inversely with disease activity and C-reactive protein levels.
Data source: An analysis of serum samples from 535 consecutive patients with IBD and 100 matched controls.
Disclosures: The investigators acknowledged grant support from the DFG Excellence Cluster “Inflammation at Interfaces,” BMBF e-med SYSINFLAME, and H2020 SysCID. One coinvestigator reported employment by CONARIS Research Institute AG, which helps develop therapies for inflammatory indications. The other investigators had no conflicts of interest.
VIDEO: High-volume endoscopists, centers produced better ERCP outcomes
High-volume endoscopists had 60% greater odds of successful endoscopic retrograde cholangiopancreatography (ERCP) than low-volume endoscopists, according to the results of a systematic review and meta-analysis.
High-volume endoscopists also had 30% lower odds of ERCP-related adverse events such as pancreatitis, perforation, and bleeding, reported Rajesh N. Keswani, MD, MS, of Northwestern University, Chicago, and his associates. High-volume centers themselves also were associated with significantly higher odds of successful ERCP (odds ratio, 2.0; 95% CI, 1.6 to 2.5), although they were not associated with a significantly lower risk of adverse events, the reviewers wrote. The study was published in the December issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2017.06.002).
Diagnostic ERCP has fallen sevenfold in the past 30 years while therapeutic use has increased 30-fold, the researchers noted. Therapeutic use spans several complex pancreaticobiliary conditions, including chronic pancreatitis, malignant jaundice, and complications of liver transplantation. This shift from diagnostic to therapeutic use has naturally increased the complexity of ERCP, the need for expert endoscopy, and the potential risk of adverse events. “As health care continues to shift toward rewarding value rather than volume, it will be increasingly important to deliver care that is effective and efficient,” the reviewers wrote. “Thus, understanding factors associated with unsuccessful interventions, such as a failed ERCP, will be of critical importance to payers and patients” (Clin Gastroenterol Hepatol. 2017 Jun 7;218:237-45).
Therefore, they searched MEDLINE, EMBASE, and the Cochrane register of controlled trials for prospective and retrospective studies published through January 2017. In all, the researchers identified 13 studies that stratified outcomes by volume per endoscopist or per center; together, these studies comprised 59,437 procedures and patients. Definitions of low volume varied by study, ranging from less than 25 to less than 156 annual ERCPs per endoscopist and from less than 87 to less than 200 annual ERCPs per center. Endoscopists who exceeded their study's volume threshold were significantly more likely to perform successful ERCPs than were low-volume endoscopists (OR, 1.6; 95% CI, 1.2 to 2.1) and were significantly less likely to have patients develop pancreatitis, perforation, or bleeding after ERCP (OR, 0.7; 95% CI, 0.5 to 0.8).
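For readers unfamiliar with how such effect sizes are constructed, the sketch below shows how an odds ratio and its 95% confidence interval are derived from a single 2x2 table of ERCP success by endoscopist volume. The counts are hypothetical, and the meta-analysis pooled study-level estimates rather than computing one table like this; an OR of 1.6 is what the text describes as “60% greater odds.”

# Illustrative only: odds ratio and 95% CI from one hypothetical 2x2 table.
# These counts are invented and are NOT data from the meta-analysis, which
# pooled study-level odds ratios rather than raw counts.
import math

# Hypothetical counts of ERCP outcomes by endoscopist volume
high_success, high_failure = 450, 50   # high-volume endoscopists
low_success, low_failure = 350, 65     # low-volume endoscopists

# Odds ratio = (a * d) / (b * c)
odds_ratio = (high_success * low_failure) / (high_failure * low_success)

# 95% CI via the standard error of the log odds ratio:
# SE = sqrt(1/a + 1/b + 1/c + 1/d)
se_log_or = math.sqrt(1/high_success + 1/high_failure + 1/low_success + 1/low_failure)
log_or = math.log(odds_ratio)
ci_lower = math.exp(log_or - 1.96 * se_log_or)
ci_upper = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_lower:.2f} to {ci_upper:.2f}")
# Prints roughly: OR = 1.67, 95% CI 1.13 to 2.48

An OR above 1 with a confidence interval that excludes 1, as in the pooled endoscopist-level estimate (1.6; 1.2 to 2.1), indicates a statistically significant association.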
“Given these compelling findings, we propose that providers and payers consider consolidating ERCP to high-volume endoscopists and centers to improve ERCP outcomes and value,” the reviewers wrote. Minimum thresholds for endoscopists and centers to maintain ERCP skills and optimize outcomes have not been defined, they noted. Intuitively, there is no “critical volume threshold” at which “outcomes suddenly improve,” but the studies in this analysis used widely varying definitions of low volume, they added. It also remains unclear whether a low-volume endoscopist can achieve optimal outcomes at a high-volume center, or vice versa, they said. They recommended studies to better define procedure success and the appropriate use of ERCP in therapeutic settings.
One reviewer acknowledged support from the University of Colorado Department of Medicine Outstanding Early Career Faculty Program. The reviewers reported having no conflicts of interest.
With the increasing proportion of complex therapeutic ERCPs, the field is shifting toward performance of these procedures by those who have had advanced training and who make them the focus of their clinical practice. Consistent with this, the meta-analysis by Keswani et al. highlights benefits of higher-volume centers and endoscopists: improved ERCP success rate (at the provider and practice level) and reduced adverse events (provider level only). It is unclear, however, if higher-volume endoscopists received additional training that translated into better outcomes. Other variables, including case complexity and provider experience, could not be fully assessed in this study.
Overall, however, this large, well-performed meta-analysis adds to the growing chorus that endoscopists and endoscopic centers will have better results if the endoscopists are specially trained and routinely perform these procedures. Future studies are needed to more accurately define procedure success (which varied significantly across the studies in this meta-analysis) and to assess other variables that affect outcomes, for which volume may be only a proxy. In an era of reporting and demonstrating value in endoscopic care, quality metrics for ERCP performance may not be fully appreciated but eventually may become the driving force in consolidation of these procedures to particular centers or providers, regardless of volume.
Avinash Ketwaroo, MD, MSc, is assistant professor in the division of gastroenterology and hepatology at Baylor College of Medicine, Houston, and an associate editor of GI & Hepatology News. He has no relevant conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: High endoscopic retrograde cholangiopancreatography (ERCP) volume was associated with greater procedure success.
Major finding: High-volume endoscopists were significantly more likely to achieve success with ERCP than were low-volume endoscopists (odds ratio, 1.6; 95% confidence interval, 1.2 to 2.1). High-volume centers also had greater odds of successful ERCP than did low-volume centers (OR, 2.0; 95% CI, 1.6 to 2.5).
Data source: A systematic review and meta-analysis of 13 studies comprising 59,437 procedures and patients.
Disclosures: One coinvestigator acknowledged support from the University of Colorado Department of Medicine Outstanding Early Career Faculty Program. The researchers reported having no conflicts of interest.