Model predicted Barrett’s esophagus progression
A scoring model encompassing just four traits accurately predicted which patients with Barrett’s esophagus were most likely to develop high-grade dysplasia or esophageal adenocarcinoma, researchers reported in the April issue of Gastroenterology (2017 Dec 19. doi: 10.1053/j.gastro.2017.12.009).
Those risk factors included sex, smoking, length of Barrett’s esophagus, and the presence of baseline low-grade dysplasia, said Sravanthi Parasa, MD, of Swedish Medical Center, Seattle, and her associates. For example, a male with a history of smoking found to have a 5-cm, nondysplastic Barrett’s esophagus on histology during his index endoscopy would fall into the model’s intermediate risk category, with a 0.7% annual risk of progression to high-grade dysplasia or esophageal adenocarcinoma, they explained. “This model has the potential to complement molecular biomarker panels currently in development,” they wrote.
Barrett’s esophagus increases the risk of esophageal adenocarcinoma by anywhere from 30 to 125 times, a range that reflects the multifactorial nature of progression and the hypothesis that not all patients with Barrett’s esophagus should undergo the same frequency of endoscopic surveillance, said the researchers. To incorporate predictors of progression into a single model, they analyzed prospective data from nearly 3,000 patients with Barrett’s esophagus who were followed for a median of 6 years at five centers in the United States and one center in the Netherlands. At baseline, patients were an average of 55 years old (standard deviation, 20 years), 84% were men, 88% were white, and the average Barrett’s esophagus length was 3.7 cm (SD, 3.2 cm).
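The reported annual progression rate can be sanity-checked with back-of-envelope arithmetic, approximating total person-time as cohort size times median follow-up; this is only a rough illustration, and the study's actual person-time accounting may differ.

```python
# Back-of-envelope check (approximate person-time, not the study's method):
# crude annual progression rate = events / person-years of follow-up.
patients = 2697          # Barrett's esophagus cohort size
median_followup = 6.0    # years, used here as a rough per-patient average
events = 154             # progressions to HGD or EAC reported in the study

person_years = patients * median_followup
annual_rate = events / person_years
print(f"{annual_rate:.1%} per year")  # prints "1.0% per year", matching the reported ~1%
```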
The researchers created the model by starting with many demographic and clinical candidate variables and then using backward selection to eliminate those that did not predict progression with a P value of .05 or less. This is the same method used in the Framingham Heart Study, they noted. In all, 154 (6%) patients with Barrett’s esophagus developed high-grade dysplasia or esophageal adenocarcinoma, an annual progression rate of about 1%. The significant predictors of progression were male sex, smoking, length of Barrett’s esophagus, and low-grade dysplasia at baseline. A model that included only these four variables distinguished progressors from nonprogressors with a c statistic of 0.76 (95% confidence interval, 0.72 to 0.80; P less than .001). Using 30% of patients as an internal validation cohort, the model’s calibration slope was 0.99 and its calibration intercept was -0.09 (perfectly calibrated models have a slope of 1.0 and an intercept of 0.0).
Therefore, the model was well calibrated and did an appropriate job of identifying risk groups, the investigators concluded. Considering that the overall risk of Barrett’s esophagus progression is low, using this model could help avoid excess costs and burdens of unnecessary surveillance, they added. “We recognize that there is a key interest in contemporary medical research whether a marker (e.g. molecular, genetic) could add to incremental value of a risk progression score,” they wrote. “This can be an area of future research.”
There were no funding sources. Dr. Parasa had no disclosures. One coinvestigator disclosed ties to Cook Medical, CDx Diagnostics, and Cosmo Pharmaceuticals.
SOURCE: Parasa S et al. Gastroenterology. 2017 Dec 19. doi: 10.1053/j.gastro.2017.12.009.
Barrett’s esophagus (BE) is the only known precursor lesion to esophageal adenocarcinoma (EAC), a rapidly rising cancer in the Western world, which has a poor 5-year survival rate of less than 20%. Management strategies to affect EAC incidence include screening and surveillance, with current guidelines recommending surveillance for all patients with a diagnosis of BE.
However, there are several challenges associated with adopting BE surveillance for all patients: It is estimated that anywhere from 2 million to 5 million U.S. adults may harbor BE, and the overall risk of BE progression to EAC is low (approximately 0.2%-0.4% annually). Both of these factors influence the cost-effectiveness of a global BE surveillance program.
Hence, a risk-stratification score that can distinguish BE patients who are at high risk for progression to high-grade dysplasia (HGD) and/or EAC from those whose disease will not progress will be extremely useful. This concept would be similar to other risk-scoring mechanisms, such as the MELD score for progression in liver disease.
Using a large multicenter cohort of patients with BE (more than 4,500 patients), this is the first risk-prediction score developed and validated using baseline demographic and endoscopy information to determine risk of progression. Readily available factors such as patient sex, smoking status, BE length, and confirmed histology were identified as risk factors for progression, which could then generate a score determining the individual patient’s risk of progression. Such a simple scoring system has the potential to tailor management based on the risk factors. In the future, inclusion of molecular biomarkers along with this score may further enhance its potential for personalized medicine in BE patients.
Prateek Sharma, MD, is a professor of medicine at the University of Kansas, Kansas City. He has no conflicts of interest.
FROM GASTROENTEROLOGY
Key clinical point: A model containing four risk factors identified patients with Barrett’s esophagus at significantly increased risk of progression to high-grade dysplasia or esophageal adenocarcinoma.
Major finding: The model’s scores identified patients with Barrett’s esophagus who progressed to high-grade dysplasia or esophageal adenocarcinoma with a c statistic of 0.76 (95% CI, 0.72 to 0.80; P less than .001).
Data source: A multicenter, longitudinal study of 2,697 patients with Barrett’s esophagus.
Disclosures: There were no funding sources. Dr. Parasa had no disclosures. One coinvestigator disclosed ties to Cook Medical, CDx Diagnostics, and Cosmo Pharmaceuticals.
Source: Parasa S et al. Gastroenterology. 2017 Dec 19. doi: 10.1053/j.gastro.2017.12.009.
VIDEO: Biomarker accurately predicted primary nonfunction after liver transplant
A glycomarker in donor liver perfusate accurately predicted primary nonfunction after liver transplantation, researchers reported in Gastroenterology.
Glycomic alterations of immunoglobulin G “represent inflammatory disturbances in the liver that [mean it] will fail after transplantation,” wrote Xavier Verhelst, MD, of Ghent (Belgium) University Hospital, and his associates. The new glycomarker “could be a tool to safely select high-risk organs for liver transplantation that otherwise would be discarded from the donor pool based on a conventional clinical assessment,” and also could help prevent engraftment failures. “To our knowledge, not a single biomarker has demonstrated the same accuracy today,” they wrote in the April issue of Gastroenterology.
Chronic shortages of donor livers contribute to morbidity and death worldwide. However, relaxing donor criteria is controversial because of the increased risk of primary nonfunction, which affects some 2%-10% of liver transplantation patients, and early allograft dysfunction, which is even more common. Although no reliable scoring systems or biomarkers have been able to predict these outcomes prior to transplantation, clinical glycomics of serum has proven useful for diagnosing hepatic fibrosis, cirrhosis, and hepatocellular carcinoma, and for distinguishing hepatic steatosis from nonalcoholic steatohepatitis. “Perfusate biomarkers are an attractive alternative [to] liver biopsy or serum markers, because perfusate is believed to represent the condition of the entire liver parenchyma and is easy to collect in large volumes,” the researchers wrote.
Accordingly, they studied 66 patients who underwent liver transplantation at a single center in Belgium and a separate validation cohort of 56 transplantation recipients from two centers. The most common reason for liver transplantation was decompensated cirrhosis secondary to alcoholism, followed by chronic hepatitis C or B virus infection, acute liver failure, and polycystic liver disease. Donor grafts were transported using cold static storage (21° C), and hepatic veins were flushed to collect perfusate before transplantation. Protein-linked N-glycans were isolated from these perfusate samples and analyzed with a multicapillary electrophoresis-based ABI3130 sequencer.
The four patients in the primary study cohort who developed primary nonfunction resembled the others in terms of all clinical and demographic parameters except that they had a markedly increased concentration (P less than .0001) of a single glycan, agalacto core-alpha-1,6-fucosylated biantennary glycan, dubbed NGA2F. The single patient in the validation cohort who developed primary nonfunction also had a significantly increased concentration of NGA2F (P = .037). There were no false positives in either cohort, and a 13% cutoff for perfusate NGA2F level identified primary nonfunction with 100% accuracy, the researchers said. In a multivariable model of donor risk index and perfusate markers, only NGA2F was prognostic for developing primary nonfunction (P less than .0001).
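The decision rule described above reduces to a simple threshold check on the perfusate NGA2F level. This hypothetical sketch applies the reported 13% cutoff; the graft names and sample values are invented for illustration.

```python
# Hypothetical sketch of the reported decision rule: flag a donor graft
# when its perfusate NGA2F level exceeds the 13% cutoff.
CUTOFF = 13.0  # percent, the cutoff reported by the researchers

def predicts_nonfunction(nga2f_pct: float) -> bool:
    """Return True when the graft exceeds the NGA2F cutoff."""
    return nga2f_pct > CUTOFF

# Invented example values for three grafts (percent NGA2F in perfusate)
grafts = [("A", 6.2), ("B", 15.8), ("C", 9.1)]
flagged = [name for name, pct in grafts if predicts_nonfunction(pct)]
print(flagged)  # only graft B exceeds the 13% cutoff
```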
The researchers found no specific glycomic signature for early allograft dysfunction, perhaps because it is more complex and multifactorial, they wrote. Although electrophoresis testing took 48 hours, work is underway to shorten this to a “clinically acceptable time frame,” they added. They recommended multicenter studies to validate their findings.
Funders included the Research Fund – Flanders and Ghent University. The researchers reported having no conflicts of interest.
SOURCE: Verhelst X et al. Gastroenterology 2018 Jan 6. doi: 10.1053/j.gastro.2017.12.027.
FROM GASTROENTEROLOGY
Key clinical point: A glycomarker in donor liver perfusate was 100% accurate at predicting primary nonfunction after liver transplantation.
Major finding: In a multivariable model of donor risk index and perfusate markers, only the single glycan NGA2F was a significant predictor of primary nonfunction (P less than .0001).
Data source: A dual-center, prospective study of 66 liver transplant patients and a 55-member validation cohort.
Disclosures: Funders included the Research Fund – Flanders and Ghent University. The researchers reported having no conflicts of interest.
Source: Verhelst X et al. Gastroenterology 2018 Jan 6. doi: 10.1053/j.gastro.2017.12.027.
VIDEO: Pioglitazone benefited NASH patients with and without T2DM
Pioglitazone therapy given for 18 months benefited patients with nonalcoholic steatohepatitis (NASH) similarly regardless of whether they had type 2 diabetes mellitus or prediabetes, according to the results of a randomized prospective trial.
The primary outcome, at least a 2-point reduction in nonalcoholic fatty liver disease activity score compared with placebo, without worsening fibrosis, was met by 48% of NASH patients with T2DM and by 46% of those with prediabetes, reported Fernando Bril, MD, of the division of endocrinology, diabetes, and metabolism at the University of Florida, Gainesville, and his associates. The report was published in Clinical Gastroenterology and Hepatology.
NASH resolved completely in 44% of patients with T2DM and in 26% of those without it, perhaps indicating that pioglitazone acts slightly differently when patients with NASH have T2DM, according to the investigators. “Although the effects on fibrosis appear to be similar in both groups, pioglitazone may contribute to halting [its] rapid progression [in T2DM],” they wrote. “These differences will deserve further exploration in larger clinical trials.”
The trial (NCT00994682) enrolled 101 patients with biopsy-confirmed NASH, of whom 52 had T2DM and 49 had prediabetes based on clinical history, baseline fasting plasma glucose, hemoglobin A1c, and an oral glucose tolerance test, as per American Diabetes Association guidelines. After a 4-week run-in period, patients were randomly assigned to receive either pioglitazone (45 mg per day) or placebo for 18 months. All patients received lifestyle counseling and a hypocaloric (500-kcal reduced) diet.
Compared with placebo, pioglitazone improved most secondary outcomes similarly regardless of whether patients had T2DM or prediabetes. The two exceptions were fibrosis and insulin sensitivity of adipose tissue. Only patients with T2DM experienced improved fibrosis with pioglitazone therapy (P = .035 vs. baseline); in prediabetic patients, fibrosis lessened moderately over time regardless of whether they received pioglitazone or placebo. Insulin sensitivity of adipose tissue improved much more markedly with treatment in patients with T2DM (P less than .001 vs. baseline) than in those with prediabetes (P = .002 for T2DM vs. prediabetes).
Compared with placebo, pioglitazone improved hepatic and skeletal muscle insulin sensitivity similarly, regardless of diabetes status. Likewise, intrahepatic triglyceride content, as measured by proton magnetic resonance spectroscopy, fell by 11% in pioglitazone recipients with T2DM and by 9% in those with prediabetes, a nonsignificant difference. Pioglitazone also led to a statistically similar decrease in plasma alanine aminotransferase level regardless of whether patients had T2DM (50 U/L) or prediabetes (36 U/L).
This trial’s key takeaway is that pioglitazone improves liver histology in NASH whether or not patients are diabetic, said the researchers. “We believed that it was essential to compare its efficacy in patients with [and] without T2DM because of the vast number of patients with prediabetes and NASH and given the significant metabolic and cardioprotective effects of pioglitazone among patients without T2DM,” they wrote. The natural history of NASH is worse in the presence of T2DM, which might explain pioglitazone’s superior effects on fibrosis and insulin sensitivity of adipose tissue in this population, they added.
The Burroughs Wellcome Fund, the American Diabetes Association, and a Veterans Affairs Merit Award supported the work. Senior author Kenneth Cusi, MD, disclosed nonfinancial support from Takeda Pharmaceuticals, grants from Novartis and Janssen Research and Development, and consulting relationships with Eli Lilly, Tobira Therapeutics, and Pfizer. The other authors had no conflicts.
Source: Bril F, et al. Clin Gastroenterol Hepatol. 2018 Feb 24. doi: 10.1016/j.cgh.2017.12.001.
Pioglitazone therapy given for 18 months benefited patients with nonalcoholic steatohepatitis (NASH) similarly regardless of whether they had type 2 diabetes mellitus or prediabetes, according to the results of a randomized prospective trial.
The primary outcome, at least a 2-point reduction in nonalcoholic fatty liver disease activity score compared with placebo, without worsening fibrosis, was met by 48% of NASH patients with T2DM and by 46% of those with prediabetes, reported Fernando Bril, MD, of the division of endocrinology, diabetes, and metabolism at the University of Florida, Gainesville, and his associates. The report was published in Clinical Gastroenterology and Hepatology.
NASH resolved completely in 44% of patients with T2DM and in 26% of those without it, perhaps indicating that pioglitazone acts slightly differently when patients with NASH have T2DM, according to the investigators. “Although the effects on fibrosis appear to be similar in both groups, pioglitazone may contribute to halting [its] rapid progression [in T2DM],” they wrote. “These differences will deserve further exploration in larger clinical trials.”
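For a sense of the underlying numbers, the resolution rates can be back-translated into approximate patient counts. This is a rough sketch: the counts below are inferred from the reported percentages and group sizes, not taken from the paper, so small rounding discrepancies are expected.

```python
# Approximate counts inferred from the reported rates:
# ~44% of the 52 patients with T2DM and ~26% of the 49
# patients with prediabetes achieved complete NASH resolution.
# These are illustrative estimates, not the paper's raw data.
n_t2dm, n_pre = 52, 49

resolved_t2dm = round(0.44 * n_t2dm)  # roughly 23 patients
resolved_pre = round(0.26 * n_pre)    # roughly 13 patients

print(resolved_t2dm, resolved_pre)
```

Even with these small groups, the roughly 23-vs.-13 split illustrates why the investigators flagged the difference as worth exploring in larger trials.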
The trial (NCT00994682) enrolled 101 patients with biopsy-confirmed NASH, of whom 52 had T2DM and 49 had prediabetes based on clinical history, baseline fasting plasma glucose, hemoglobin A1c, and an oral glucose tolerance test, as per American Diabetes Association guidelines. After a 4-week run-in period, patients were randomly assigned to receive either pioglitazone (45 mg per day) or placebo for 18 months. All patients received lifestyle counseling and a hypocaloric (500-kcal reduced) diet.
Compared with placebo, pioglitazone improved most secondary outcomes similarly regardless of whether patients had T2DM or prediabetes. The two exceptions were fibrosis and insulin sensitivity of adipose tissue. Only patients with T2DM experienced improved fibrosis with pioglitazone therapy (P = .035 vs. baseline); in prediabetic patients, fibrosis lessened moderately over time regardless of whether they received pioglitazone or placebo. Insulin sensitivity of adipose tissue improved much more markedly with treatment in patients with T2DM (P less than .001 vs. baseline) than in those with prediabetes (P = .002 for T2DM vs. prediabetes).
Compared with placebo, pioglitazone improved hepatic and skeletal muscle insulin sensitivity similarly, regardless of diabetes status. Likewise, intrahepatic triglyceride content, as measured by proton magnetic resonance spectroscopy, fell by 11% in pioglitazone recipients with T2DM and by 9% in those with prediabetes, a nonsignificant difference. Pioglitazone also led to a statistically similar decrease in plasma alanine aminotransferase level regardless of whether patients had T2DM (50 U/L) or were prediabetic (36 U/L).
This trial’s key takeaway is that pioglitazone improves liver histology in NASH whether or not patients are diabetic, said the researchers. “We believed that it was essential to compare its efficacy in patients with [and] without T2DM because of the vast number of patients with prediabetes and NASH and given the significant metabolic and cardioprotective effects of pioglitazone among patients without T2DM,” they wrote. The natural history of NASH is worse in the presence of T2DM, which might explain pioglitazone’s superior effects on fibrosis and insulin sensitivity of adipose tissue in this population, they added.
The Burroughs Wellcome Fund, the American Diabetes Association, and a Veterans Affairs Merit Award supported the work. Senior author Kenneth Cusi, MD, disclosed nonfinancial support from Takeda Pharmaceuticals, grants from Novartis and Janssen Research and Development, and consulting relationships with Eli Lilly, Tobira Therapeutics, and Pfizer. The other authors had no conflicts.
Source: Bril F, et al. Clin Gastroenterol Hepatol. 2018 Feb 24. doi: 10.1016/j.cgh.2017.12.001.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: Pioglitazone improved liver measures in patients with nonalcoholic steatohepatitis whether or not they were diabetic.
Major finding: Nonalcoholic fatty liver disease activity score fell by at least 2 points, without worsening fibrosis, in 48% of T2DM patients and 46% of patients with prediabetes.
Data source: A prospective study of 101 patients with NASH, of whom 52 had type 2 diabetes and 49 had prediabetes.
Disclosures: The Burroughs Wellcome Fund, the American Diabetes Association, and a Veterans Affairs Merit Award supported the work. Senior author Kenneth Cusi, MD, disclosed nonfinancial support from Takeda Pharmaceuticals, grants from Novartis and Janssen Research and Development, and consulting relationships with Eli Lilly and Company, Tobira Therapeutics, and Pfizer. The other authors had no conflicts.
Source: Bril F et al. Clin Gastroenterol Hepatol. 2018 Feb 24. doi: 10.1016/j.cgh.2017.12.001.
AGA Clinical Practice Update: Incorporating psychological care in the management of chronic digestive diseases
The burden of chronic digestive diseases encompasses digestive symptoms and disease severity, as well as patients’ ability to cope with them. Conditions such as irritable bowel syndrome, gastroesophageal reflux disease, and the inflammatory bowel diseases cannot be disentangled from their psychosocial context. In this regard, gastroenterologists play a crucial role in promoting best practices for the assessment of patients across the spectrum of disease and their referral to brain-gut psychotherapies.
A review by Laurie Keefer, PhD, AGAF, and her coauthors, published in the April issue of Gastroenterology, provided a clinical update on the structure and efficacy of two major classes of psychogastroenterology – cognitive-behavioral therapy (CBT) and gut-directed hypnotherapy (HYP). The review discussed the effects of these therapies on GI symptoms and the patients’ ability to improve coping, resilience, and self-regulation. The review also provided a framework to understand the scientific rationale and best practices associated with incorporating brain-gut psychotherapies into routine GI care. Furthermore, it presented recommendations on how to address psychological issues and make effective referrals in routine practice.
Previous studies had highlighted that the burden of chronic digestive diseases is amplified by psychosocial factors, including poor coping, depression, and poor social support. Mental health professionals specializing in psychogastroenterology integrate the use of brain-gut psychotherapies into GI practice settings, which may help reduce health care utilization and symptom burden.
The article contains best practice advice based on a review of the literature, including existing systematic reviews and expert opinions. These best practices include the following:
- Gastroenterologists routinely should assess health-related quality of life, symptom-specific anxieties, early-life adversity, and functional impairment related to a patient’s digestive complaints.
- Gastroenterologists should master patient-friendly language to explain the brain-gut pathway, how this pathway can become dysregulated by any number of factors, the psychosocial risk, perpetuating, and maintaining factors of GI diseases, and why the gastroenterologist is referring a patient to a mental health provider.
- Gastroenterologists should know the structure and core features of the most effective brain-gut psychotherapies.
- Gastroenterologists should establish a direct referral and ongoing communication pathway with one or two qualified mental health providers and assure patients that the gastroenterologist will remain part of the care team.
- Gastroenterologists should familiarize themselves with one or two neuromodulators that can be used to augment behavioral therapies when necessary.
Patient education about the referral to a mental health provider is difficult and requires attention to detail and fostering a good physician-patient relationship. It is important to help patients understand why they are being referred to a psychologist for a gastrointestinal complaint and that their physical symptoms are not being discounted. Failure to properly explain the reason for referral may lead to poor follow-through and even lead the patient to seek care with another provider.
To foster widespread integration of these services, research and clinical gaps need to be addressed. Research gaps include the lack of prospective trials comparing the relative effectiveness of brain-gut psychotherapies with each other and/or with psychotropic medications. Other promising brain-gut therapies, such as mindfulness meditation and acceptance-based approaches, lack sufficient research to be included in clinical practice. Limited evidence supports the effects psychotherapies have in accelerating or enhancing the efficacy of pharmacologic therapies and in improving disease course or inflammation in conditions such as Crohn’s disease and ulcerative colitis.
Clinical gaps include the need for better coverage for these therapies by insurance – many providers are out of network or do not accept insurance, although Medicare and commercial insurance plans often cover the cost of services in network. Health psychologists can be reimbursed for health and behavior codes for treating these conditions (CPTs 96150/96152), but there are restrictions on which other types of professionals can use them. Ongoing research is focusing on the cost-effectiveness of these therapies, although some highly effective therapies may be short term and have a one-time total cost of $1,000-$2,000 paid out of pocket. There is a growing need to expand remote, online, or digitally based brain-gut therapies with more trained health care providers that could offset overhead and other therapy costs.
SOURCE: Keefer L et al. Gastroenterology. doi: 10.1053/j.gastro.2018.01.045.
FROM GASTROENTEROLOGY
Opioids linked to mortality in IBD
Among patients with inflammatory bowel disease (IBD), opioid prescriptions tripled during a recent 20-year period, and heavy use of strong opioids was a significant predictor of all-cause mortality, according to a large cohort study reported in the April issue of Clinical Gastroenterology and Hepatology.
Because this study was retrospective, it could not establish causality, said Nicholas E. Burr, MD, of the University of Leeds (England) and his associates. But “[de]signing and conducting a large-scale randomized controlled trial may not be feasible,” they wrote. “Despite the limitations of observational data, population data sets may be the best method to investigate a potential effect.”
The gastrointestinal side effects of many analgesics complicate pain management for patients with IBD, who not only live with chronic abdominal pain but also can develop arthropathy-related musculoskeletal pain, chronic widespread pain, and fibromyalgia. In addition to the risk of narcotic bowel syndrome associated with opioid use in IBD, opioids can mask flares or cause toxic dilatation if administered during acute flares, the researchers noted. Because few studies had examined opioid use in IBD, the investigators retrospectively studied 3,517 individuals with Crohn’s disease and 5,349 patients with ulcerative colitis from ResearchOne, a primary care electronic health records database that covers about 10% of patients in England. The data set excluded patients with indeterminate colitis or who underwent colectomy for ulcerative colitis.
From 1990 through 1993, only 10% of patients with IBD were prescribed opioids, vs. 30% from 2010 through 2013 (P less than .005). After the investigators controlled for numerous demographic and clinical variables, being prescribed a strong opioid (morphine, oxycodone, fentanyl, buprenorphine, methadone, hydromorphone, or pethidine) more than three times per year significantly correlated with all-cause mortality in both Crohn’s disease (hazard ratio, 2.2; 95% confidence interval, 1.2-4.0) and ulcerative colitis (HR, 3.3; 95% CI, 1.8-6.2), the researchers reported.

Among patients with ulcerative colitis, more moderate use of strong opioids (one to three prescriptions annually) also significantly correlated with all-cause mortality (HR, 2.4; 95% CI, 1.2-5.2), as did heavy use of codeine (HR, 1.8; 95% CI, 1.1-3.1), but these associations did not reach statistical significance among patients with Crohn’s disease. Tramadol was not linked to mortality in either IBD subtype, whether used alone or in combination with codeine.
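As a quick plausibility check on figures like these (an illustrative sketch, not part of the study’s analysis), hazard ratio confidence intervals are symmetric on the log scale, so each reported point estimate should sit near the geometric mean of its interval bounds:

```python
import math

def geometric_midpoint(lower, upper):
    """Geometric mean of the CI bounds; for a log-symmetric
    interval this approximates the hazard ratio point estimate."""
    return math.sqrt(lower * upper)

# Reported values: Crohn's disease HR 2.2 (95% CI, 1.2-4.0);
# ulcerative colitis HR 3.3 (95% CI, 1.8-6.2).
print(round(geometric_midpoint(1.2, 4.0), 1))  # ≈ 2.2
print(round(geometric_midpoint(1.8, 6.2), 1))  # ≈ 3.3
```

Both reported point estimates match the geometric midpoints of their intervals to one decimal place, as expected for hazard ratios estimated on the log scale.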
Dr. Burr and his associates said they could not control for several important potential confounders, including fistulating disease, quality of life, mental illness, substance abuse, and history of abuse, all of which have been linked to opioid use in IBD. Nonetheless, they found dose-dependent correlations with mortality that highlight a need for pharmacovigilance of opioids in IBD, particularly given dramatic increases in prescriptions, they said. These were primary care data, which tend to accurately reflect long-term medication use, they noted.
Crohn’s and Colitis U.K. and the Leeds Teaching Hospitals NHS Trust Charitable Foundation provided funding. The investigators reported having no conflicts of interest.
SOURCE: Burr NE et al. Clin Gastroenterol Hepatol. doi: 10.1016/j.cgh.2017.10.022.
Balancing control of pain and prevention of opioid-related morbidity and mortality remains a major challenge for health care providers, particularly in IBD. This study by Burr et al. highlights the potential dangers of opiate use among patients with IBD with the finding that opioid prescriptions at least three times per year were associated with a two- to threefold increase in mortality. Another important observation from this study was that the prevalence of opioid use among IBD patients increased from 10% to 30% during 1990-2013. One would like to believe that, with better treatment modalities for IBD, fewer patients would require chronic opioid medications over time; however, this observation suggests that there has been a shift in the perception and acceptance of opioids for IBD patients.
Studying opioid use among IBD patients remains challenging as even well-controlled retrospective studies are unable to fully separate whether opioid use is merely associated with more aggressive IBD courses and hence worse outcomes, or whether opioid use directly results in increased mortality. As clinicians, we are left with the difficult balance of addressing true symptoms of pain with the potential harm from opioids; we often counsel against the use of nonsteroidal anti-inflammatory medications in IBD, and yet there is growing concern about use of opioids in this same population. Further research is needed to address patients with pain not directly tied to inflammation or complications of IBD, as well as nonmedical, behavioral approaches to pain management.
Jason K. Hou, MD, MS, is an investigator in the clinical epidemiology and outcomes program, Center for Innovations in Quality, Effectiveness and Safety at the Michael E. DeBakey VA Medical Center, Houston; assistant professor, department of medicine, section of gastroenterology & hepatology, Baylor College of Medicine, Houston; and codirector of Inflammatory Bowel Disease Center at the VA Medical Center at Baylor. He has no conflicts of interest.
From 1990 through 1993, only 10% of patients with IBD were prescribed opioids, vs. 30% from 2010 through 2013 (P less than .005). After the investigators controlled for numerous demographic and clinical variables, being prescribed a strong opioid (morphine, oxycodone, fentanyl, buprenorphine, methadone, hydromorphone, or pethidine) more than three times per year significantly correlated with all-cause mortality in both Crohn’s disease (hazard ratio, 2.2; 95% confidence interval, 1.2-4.0) and ulcerative colitis (HR, 3.3; 95% CI, 1.8-6.2), the researchers reported.
Among patients with ulcerative colitis, more moderate use of strong opioids (one to three prescriptions annually) also significantly correlated with all-cause mortality (HR, 2.4; 95% CI, 1.2-5.2), as did heavy use of codeine (HR, 1.8; 95% CI, 1.1-3.1), but these associations did not reach statistical significance among patients with Crohn’s disease. Tramadol was not linked to mortality in either IBD subtype, whether used alone or in combination with codeine.
Dr. Burr and his associates said they could not control for several important potential confounders, including fistulating disease, quality of life, mental illness, substance abuse, and history of abuse, all of which have been linked to opioid use in IBD. Nonetheless, they found dose-dependent correlations with mortality that highlight a need for pharmacovigilance of opioids in IBD, particularly given dramatic increases in prescriptions, they said. These were primary care data, which tend to accurately reflect long-term medication use, they noted.
Crohn’s and Colitis U.K. and the Leeds Teaching Hospitals NHS Trust Charitable Foundation provided funding. The investigators reported having no conflicts of interest.
SOURCE: Burr NE et al. Clin Gastroenterol Hepatol. doi: 10.1016/j.cgh.2017.10.022.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: Opioid prescriptions among patients with IBD tripled over roughly 20 years, and heavy use of strong opioids significantly predicted all-cause mortality.
Major finding: Thirty percent of patients were prescribed opioids in 2010-2013 vs. only 10% in 1990-1993 (P less than .005 for trend). Heavy use of strong opioids significantly correlated with all-cause mortality in both Crohn’s disease (hazard ratio, 2.2; 95% confidence interval, 1.2-4.0) and ulcerative colitis (HR, 3.3; 95% CI, 1.8-6.2).
Study details: A retrospective cohort study of 3,517 individuals with Crohn’s disease and 5,349 individuals with ulcerative colitis.
Disclosures: Crohn’s and Colitis U.K. and the Leeds Teaching Hospitals NHS Trust Charitable Foundation provided funding. The investigators reported having no conflicts.
Source: Burr NE et al. Clin Gastroenterol Hepatol. doi: 10.1016/j.cgh.2017.10.022.
Bioengineered liver models screen drugs and study liver injury
Bioengineered human liver models can mimic key aspects of the liver microenvironment, resulting in stabilized liver functions for several weeks in vitro. Studies have focused on using these models to investigate cell responses to drugs and other stimuli (for example, viruses and cell differentiation cues) to predict clinical outcomes. Gregory H. Underhill, PhD, from the department of bioengineering at the University of Illinois at Urbana-Champaign, and Salman R. Khetani, PhD, from the department of bioengineering at the University of Illinois at Chicago, presented a comprehensive review of these advances in bioengineered liver models in Cellular and Molecular Gastroenterology and Hepatology (doi: 10.1016/j.jcmgh.2017.11.012).
Drug-induced liver injury (DILI) is a leading cause of drug attrition in the United States, with some marketed drugs causing cell necrosis, hepatitis, cholestasis, fibrosis, or a mixture of injury types. Although the Food and Drug Administration requires preclinical drug testing in animal models, differences in species-specific drug metabolism pathways and human genetics may result in inadequate identification of potential for human DILI. Some bioengineered liver models for in vitro studies are based on tissue engineering using high-throughput microarrays, protein micropatterning, microfluidics, specialized plates, biomaterial scaffolds, and bioprinting.
High-throughput cell microarrays enable systematic analysis of a large number of drugs or compounds at a relatively low cost. Several culture platforms have been developed using multiple sources of liver cells, including cancerous and immortalized cell lines. These platforms show enhanced capabilities to evaluate combinatorial effects of multiple signals with independent control of biochemical and biomechanical cues. For instance, a microchip platform for transducing 3-D liver cell cultures with genes for drug metabolism enzymes featuring 532 reaction vessels (micropillars and corresponding microwells) was able to provide information about certain enzyme combinations that led to drug toxicity in cells. The high-throughput cell microarrays are, however, primarily dependent on imaging-based readouts and have a limited ability to investigate cell responses to gradients of microenvironmental signals.
Liver development, physiology, and pathophysiology are dependent on homotypic and heterotypic interactions between parenchymal and nonparenchymal cells (NPCs). Cocultures with both liver- and nonliver-derived NPC types, in vitro, can induce liver functions transiently and have proven useful for investigating host responses to sepsis, mutagenesis, xenobiotic metabolism and toxicity, response to oxidative stress, lipid metabolism, and induction of the acute-phase response. Micropatterned cocultures (MPCCs) are designed to allow the use of different NPC types without significantly altering hepatocyte homotypic interactions. Cell-cell interactions can be precisely controlled to allow for stable functions for up to 4-6 weeks, whereas more randomly distributed cocultures have limited stability. Unlike randomly distributed cocultures, MPCCs can be infected with HBV, HCV, and malaria parasites. Potential limitations of MPCCs include the requirement for specialized equipment and devices for patterning collagen for hepatocyte attachment.
Randomly distributed spheroids or organoids enable 3-D establishment of homotypic cell-cell interactions surrounded by an extracellular matrix. The spheroids can be further cocultured with NPCs that facilitate heterotypic cell-cell interactions and allow the evaluation of outcomes resulting from drugs and other stimuli. Hepatic spheroids maintain major liver functions for several weeks and have proven to be compatible with multiple applications within the drug development pipeline.
These spheroids showed greater sensitivity in identifying known hepatotoxic drugs than did short-term primary human hepatocyte (PHH) monolayers. PHHs secreted liver proteins, such as albumin, transferrin, and fibrinogen, and showed cytochrome-P450 activities for 77-90 days when cultured on a nylon scaffold containing a mixture of liver NPCs and PHHs.
Nanopillar plates can be used to create induced pluripotent stem cell–derived human hepatocyte-like cell (iHep) spheroids; although these spheroids showed some potential for initial drug toxicity screening, they had lower overall sensitivity than conventional PHH monolayers, which suggests that further maturation of iHeps is likely required.
Potential limitations of randomly distributed spheroids include necrosis of cells in the center of larger spheroids and the requirement for expensive confocal microscopy for high-content imaging of entire spheroid cultures. To overcome the limitation of disorganized cell type interactions over time within the randomly distributed spheroids/organoids, bioprinted human liver organoids are designed to allow precise control of cell placement.
Yet another bioengineered liver model is based on perfusion systems, or bioreactors, that enable dynamic fluid flow for nutrient and waste exchange. These so-called liver-on-a-chip devices contain hepatocyte aggregates adhered to collagen-coated microchannel walls; the channels are then perfused at flow rates optimized both to meet the oxygen demands of the hepatocytes and to deliver low shear stress similar to that experienced in vivo. Layered architectures can be created with single-chamber or multichamber microfluidic device designs that can sustain cell functionality for 2-4 weeks.
Some of the limitations of perfusion systems include the potential binding of drugs to tubing and materials used, large dead volume requiring higher quantities of novel compounds for the treatment of cell cultures, low throughput, and washing away of built-up beneficial molecules with perfusion.
The ongoing development of more sophisticated engineering tools for manipulating cells in culture will lead to continued advances in bioengineered livers that will show improving sensitivity for the prediction of clinically relevant drug and disease outcomes.
This work was funded by National Institutes of Health grants. The author Dr. Khetani disclosed a conflict of interest with Ascendance Biotechnology, which has licensed the micropatterned coculture and related systems from Massachusetts Institute of Technology, Cambridge, and Colorado State University, Fort Collins, for commercial distribution. Dr. Underhill disclosed no conflicts.
SOURCE: Underhill GH and Khetani SR. Cell Mol Gastroenterol Hepatol. 2017. doi: 10.1016/j.jcmgh.2017.11.012.
Thirty to 50 new drugs are approved in the United States annually, at a development cost of approximately $2.5 billion per drug. Nine out of 10 drugs never make it to market, and of those that do, adverse events affect their longevity. Hepatotoxicity is the most frequent adverse drug reaction, and drug-induced liver injury, which can lead to acute liver failure, occurs in a subset of affected patients. Understanding a drug’s risk of hepatotoxicity before patients start using it can not only save lives but also conceivably reduce the costs incurred by pharmaceutical companies, which are passed on to consumers.
However, just as we have seen with the limitations of the in vitro systems, bioartificial livers are unlikely to be successful unless they integrate the liver’s complex functions of protein synthesis, immune surveillance, energy homeostasis, and nutrient sensing. The future is bright, though, as biomedical scientists and bioengineers continue to push the envelope by advancing both in vitro and bioartificial technologies.
Rotonya Carr, MD, is an assistant professor of medicine in the division of gastroenterology at the University of Pennsylvania, Philadelphia. She receives research support from Intercept Pharmaceuticals.
resulting in stabilized liver functions for several weeks in vitro. Studies have focused on using these models to investigate cell responses to drugs and other stimuli (for example, viruses and cell differentiation cues) to predict clinical outcomes. Gregory H. Underhill, PhD, from the department of bioengineering at the University of Illinois at Urbana-Champaign and Salman R. Khetani, PhD, from the department of bioengineering at the University of Illinois in Chicago presented a comprehensive review of the these advances in bioengineered liver models in Cellular and Molecular Gastroenterology and Hepatology (doi: 10.1016/j.jcmgh.2017.11.012).
Drug-induced liver injury (DILI) is a leading cause of drug attrition in the United States, with some marketed drugs causing cell necrosis, hepatitis, cholestasis, fibrosis, or a mixture of injury types. Although the Food and Drug Administration requires preclinical drug testing in animal models, differences in species-specific drug metabolism pathways and human genetics may result in inadequate identification of potential for human DILI. Some bioengineered liver models for in vitro studies are based on tissue engineering using high-throughput microarrays, protein micropatterning, microfluidics, specialized plates, biomaterial scaffolds, and bioprinting.
High-throughput cell microarrays enable systematic analysis of a large number of drugs or compounds at a relatively low cost. Several culture platforms have been developed using multiple sources of liver cells, including cancerous and immortalized cell lines. These platforms show enhanced capabilities to evaluate combinatorial effects of multiple signals with independent control of biochemical and biomechanical cues. For instance, a microchip platform for transducing 3-D liver cell cultures with genes for drug metabolism enzymes featuring 532 reaction vessels (micropillars and corresponding microwells) was able to provide information about certain enzyme combinations that led to drug toxicity in cells. The high-throughput cell microarrays are, however, primarily dependent on imaging-based readouts and have a limited ability to investigate cell responses to gradients of microenvironmental signals.
Liver development, physiology, and pathophysiology are dependent on homotypic and heterotypic interactions between parenchymal and nonparenchymal cells (NPCs). Cocultures with both liver- and nonliver-derived NPC types, in vitro, can induce liver functions transiently and have proven useful for investigating host responses to sepsis, mutagenesis, xenobiotic metabolism and toxicity, response to oxidative stress, lipid metabolism, and induction of the acute-phase response. Micropatterned cocultures (MPCCs) are designed to allow the use of different NPC types without significantly altering hepatocyte homotypic interactions. Cell-cell interactions can be precisely controlled to allow for stable functions for up to 4-6 weeks, whereas more randomly distributed cocultures have limited stability. Unlike randomly distributed cocultures, MPCCs can be infected with HBV, HCV, and malaria. Potential limitations of MPCCs include the requirement for specialized equipment and devices for patterning collagen for hepatocyte attachment.
Randomly distributed spheroids or organoids enable 3-D establishment of homotypic cell-cell interactions surrounded by an extracellular matrix. The spheroids can be further cocultured with NPCs that facilitate heterotypic cell-cell interactions and allow the evaluation of outcomes resulting from drugs and other stimuli. Hepatic spheroids maintain major liver functions for several weeks and have proven to be compatible with multiple applications within the drug development pipeline.
These spheroids showed greater sensitivity in identifying known hepatotoxic drugs than did short-term primary human hepatocyte (PHH) monolayers. PHHs secreted liver proteins, such as albumin, transferrin, and fibrinogen, and showed cytochrome-P450 activities for 77-90 days when cultured on a nylon scaffold containing a mixture of liver NPCs and PHHs.
Nanopillar plates can be used to create induced pluripotent stem cell–derived human hepatocyte-like cell (iHep) spheroids; although these spheroids showed some potential for initial drug toxicity screening, they had lower overall sensitivity than conventional PHH monolayers, which suggests that further maturation of iHeps is likely required.
Potential limitations of randomly distributed spheroids include necrosis of cells in the center of larger spheroids and the requirement for expensive confocal microscopy for high-content imaging of entire spheroid cultures. To overcome the limitation of disorganized cell type interactions over time within the randomly distributed spheroids/organoids, bioprinted human liver organoids are designed to allow precise control of cell placement.
Yet another bioengineered liver model is based on perfusion systems or bioreactors that enable dynamic fluid flow for nutrient and waste exchange. These so called liver-on-a-chip devices contain hepatocyte aggregates adhered to collagen-coated microchannel walls; these are then perfused at optimal flow rates both to meet the oxygen demands of the hepatocytes and deliver low shear stress to the cells that’s similar to what would be the case in vivo. Layered architectures can be created with single-chamber or multichamber, microfluidic device designs that can sustain cell functionality for 2-4 weeks.
Some of the limitations of perfusion systems include the potential binding of drugs to tubing and materials used, large dead volume requiring higher quantities of novel compounds for the treatment of cell cultures, low throughput, and washing away of built-up beneficial molecules with perfusion.
The ongoing development of more sophisticated engineering tools for manipulating cells in culture will lead to continued advances in bioengineered livers that will show improving sensitivity for the prediction of clinically relevant drug and disease outcomes.
This work was funded by National Institutes of Health grants. The author Dr. Khetani disclosed a conflict of interest with Ascendance Biotechnology, which has licensed the micropatterned coculture and related systems from Massachusetts Institute of Technology, Cambridge, and Colorado State University, Fort Collins, for commercial distribution. Dr. Underhill disclosed no conflicts.
SOURCE: Underhill GH and Khetani SR. Cell Molec Gastro Hepatol. 2017. doi: 10.1016/j.jcmgh.2017.11.012.
FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY
Engineered liver models to study human hepatotropic pathogens
Engineered liver models have recently enabled exciting progress in the study of hepatotropic pathogens and the liver-dependent infectious diseases they cause. Such models are crucial for developing and validating therapeutic interventions, such as drug and vaccine candidates that act on liver cells. The engineered models range from two-dimensional (2-D) cultures of primary human hepatocytes (HHs) and stem cell–derived progeny to three-dimensional (3-D) organoid cultures and humanized rodent models. A review by Nil Gural and colleagues, published in Cellular and Molecular Gastroenterology and Hepatology, described these unique models and discussed progress in combining individual approaches and in pairing the most appropriate model system with the most appropriate readout modality.
The major human hepatotropic pathogens include hepatitis C virus (HCV), hepatitis B virus (HBV), and the protozoan parasites Plasmodium falciparum and P. vivax. While HBV and HCV can cause chronic liver diseases such as cirrhosis and hepatocellular carcinoma, Plasmodium parasites cause malaria. The use of cancer cell lines and animal models to study host-pathogen interactions is limited by uncontrolled proliferation, abnormal liver-specific functions, and the stringent host dependency of the hepatotropic pathogens. HHs are thus the ideal system for studying these pathogens; however, maintaining these cells ex vivo is challenging.
For instance, 2-D monolayers of human hepatoma-derived cell lines (such as HepG2-A16 and HepaRG) are easier to maintain, to amplify for scaling up, and to use for drug screening, thus representing a renewable alternative to primary hepatocytes. These model systems have been useful for studying short-term infections with human Plasmodium parasites (P. vivax and P. falciparum) and with other hepatotropic pathogens such as Ebola, Lassa, human cytomegalovirus, and dengue viruses, and for generating virion stocks (HCV, HBV). For long-term analyses and cultures, as well as for clinical isolates of pathogens that do not infect hepatoma cells, immortalized cell lines have been engineered to differentiate and maintain HH functions for a longer duration. Additionally, cocultivation of primary hepatocytes with nonparenchymal cells, or of hepatocytes with mouse fibroblasts, preserves the hepatocyte phenotype. The latter is a self-assembling coculture system that can maintain an infection for over 30 days and be used for testing anti-HBV drugs. A micropatterned coculture system, in which hepatocytes are positioned in “islands” via photolithographic patterning of collagen and surrounded by mouse embryonic fibroblasts, can maintain hepatocyte phenotypes for 4-6 weeks and remains permissive to P. falciparum, P. vivax, HBV, and HCV infections. Furthermore, micropatterned coculture systems support the full liver developmental stages of both P. falciparum and P. vivax, with release of merozoites from hepatocytes and their subsequent infection of overlaid human red blood cells.
Alternatively, embryonic stem cells and induced pluripotent stem cells of human origin can be differentiated into hepatocyte-like cells that enable investigation of host genetics within the context of host-pathogen interactions, and can also be used for target identification in drug development. However, stem cell cultures require significant culture expertise and may not represent a fully differentiated adult hepatocyte phenotype.
Although 2-D cultures offer ease of use and of monitoring infection, they often lack the complexity of the liver microenvironment and the impact of different cell types on liver infections. A 3-D radial-flow bioreactor (cylindrical matrix) was able to maintain and amplify human hepatoma cells (for example, Huh7 cells) by providing a sufficient oxygen and nutrient supply, supporting productive HCV infection for months. Other 3-D cultures of hepatoma cells using polyethylene glycol–based hydrogels, thermoreversible gelatin polymers, alginate, galactosylated cellulosic sponges, Matrigel, and collagen have been developed and shown to be permissive to HCV or HBV infections. Although 3-D coculture systems exhibit better hepatic function and differential gene expression profiles in comparison with their 2-D counterparts, they require a large quantity of cells and are a challenge to scale up. Recently, several liver-on-a-chip models have been created that mimic the shear stress, blood flow, and extracellular environment within a tissue, holding great potential for modeling liver-specific pathogens.
Humanized mouse models with ectopic human liver structures have been developed in which primary HHs are transplanted following liver injury. Chimeric mouse models including Alb-uPA/SCID (HHs transplanted into urokinase-type plasminogen activator-transgenic severe combined immunodeficient mice), FNRG/FRG (HHs transplanted into Fah[-/-], Rag2[-/-], and Il2rg[-/-] mice with or without a nonobese diabetic background), and TK-NOG (HHs transplanted into herpes simplex virus type-1 thymidine kinase mice) were validated for HCV, HBV, P. falciparum, and P. vivax infections. It is, however, laborious to create and maintain chimeric mouse models and monitor infection processes in them.
It is important to note that the selection of model system and the readout modality to monitor infection will vary based on the experimental question at hand. Tissue engineering has thus far made significant contributions to the knowledge of hepatotropic pathogens; a continued effort to develop better liver models is envisioned.
Gural et al. present a timely and outstanding review of the advances made in the engineering of human-relevant liver culture platforms for investigating the molecular mechanisms of infectious diseases (e.g., hepatitis B/C viruses and Plasmodium parasites that cause malaria) and developing better drugs or vaccines against such diseases. The authors cover a continuum of platforms with increasing physiological complexity, such as 2-D hepatocyte monocultures on collagen-coated plastic, 2-D cocultures of hepatocytes and nonparenchymal cells (both randomly distributed and patterned into microdomains to optimize cell-cell contact), 3-D cultures/cocultures housed in biomaterial-based scaffolds, perfusion-based bioreactors to induce cell growth and phenotypic stability, and finally rodents with humanized livers. Cell sourcing considerations for building human-relevant platforms are discussed, including cancerous cell lines, primary human hepatocytes, and stem cell–derived hepatocytes (e.g., induced pluripotent stem cells).
From the discussions of various studies, it is clear that this field has benefitted tremendously from advances in tissue engineering, including microfabrication tools adapted from the semiconductor industry, to construct human liver platforms that last for several weeks in vitro, can be infected with hepatitis B/C virus and Plasmodium parasites with high efficiencies, and are very useful for high-throughput and high-content drug screening applications. The latest protocols in isolating and cryopreserving primary human hepatocytes and differentiating stem cells into hepatocyte-like cells with adult functions help reduce the reliance on abnormal or cancerous cell lines for building platforms with higher relevance to the clinic. Ultimately, continued advances in microfabricated human liver platforms can aid our understanding of liver infections and spur further drug/vaccine development.
Salman R. Khetani, PhD, is associate professor, department of bioengineering, University of Illinois at Chicago. He has no conflicts of interest.
VIDEO: Model supports endoscopic resection for some T1b esophageal adenocarcinomas
Endoscopic treatment of T1a esophageal adenocarcinoma outperformed esophagectomy across a range of ages and comorbidity levels in a Markov model.
Esophagectomy produced 0.16 more unadjusted life-years, but led to 0.27 fewer quality-adjusted life-years (QALYs), in the hypothetical case of a 75-year-old man with T1aN0M0 esophageal adenocarcinoma (EAC) and a Charlson comorbidity index score of 0, reported Jacqueline N. Chu, MD, of Massachusetts General Hospital, Boston, and her associates. “[We] believe QALYs are a more important endpoint because of the significant morbidity associated with esophagectomy,” they wrote in the March issue of Clinical Gastroenterology and Hepatology.
Source: American Gastroenterological Association
In contrast, the model portrayed the management of T1b EAC as “an individualized decision” – esophagectomy was preferable in 60- to 70-year-old patients with T1b EAC, but serial endoscopic treatment was better when patients were older, with more comorbidities, the researchers said. “For the sickest patients, those aged 80 and older with comorbidity index of 2, endoscopic treatment not only provided more QALYs but more unadjusted life years as well.”
Treatment of T1a EAC is transitioning from esophagectomy to serial endoscopic resection, which physicians still tend to regard as too risky in T1b EAC. The Markov model evaluated the efficacy and cost-effectiveness of the two approaches in hypothetical T1a and T1b patients of various ages and comorbidity levels, using cancer death data from the Surveillance, Epidemiology, and End Results (SEER)-Medicare database and published cost data converted to 2017 U.S. dollars based on the U.S. Bureau of Labor Statistics’ Consumer Price Index.
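To illustrate the mechanics of this kind of analysis, the sketch below is a minimal two-strategy Markov cohort model in Python. Every number in it (annual mortality risks, utility weights, the 30-year horizon) is a hypothetical placeholder chosen for illustration, not an input from the study, and the function name is invented for this example.

```python
# Minimal Markov cohort model sketch; all parameters are hypothetical.
def markov_life_years(annual_mortality, utility, cycles=30):
    """Return (unadjusted life-years, QALYs) for a cohort facing a
    constant annual mortality risk, with a fixed utility weight."""
    alive = 1.0        # fraction of the cohort alive at cycle start
    life_years = 0.0
    qalys = 0.0
    for _ in range(cycles):
        survivors = alive * (1.0 - annual_mortality)
        # Half-cycle correction: credit the average of start- and
        # end-of-cycle cohort membership to this cycle.
        person_years = (alive + survivors) / 2.0
        life_years += person_years
        qalys += person_years * utility
        alive = survivors
    return life_years, qalys

# Hypothetical trade-off: surgery lowers annual mortality slightly but
# carries a larger quality-of-life decrement than endoscopic therapy.
ly_surg, qaly_surg = markov_life_years(annual_mortality=0.10, utility=0.80)
ly_endo, qaly_endo = markov_life_years(annual_mortality=0.11, utility=0.95)

print(ly_surg - ly_endo)      # positive: surgery gains life-years
print(qaly_surg - qaly_endo)  # negative: surgery loses QALYs
```

With these made-up inputs, the surgical strategy accrues more unadjusted life-years but fewer QALYs than the endoscopic strategy, mirroring the qualitative T1a trade-off reported above; the actual analysis used stage-, age-, and comorbidity-specific transition probabilities derived from SEER-Medicare data rather than constant risks.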
Like the T1a case, the T1b base case consisted of a 75-year-old man with a Charlson comorbidity index of 0. Esophagectomy produced 0.72 more unadjusted life-years than did endoscopic treatment (5.73 vs. 5.01) while yielding 0.22 more QALYs (4.07 vs. 3.85, respectively). Esophagectomy cost $156,981 more, but the model did not account for the costs of chemotherapy and radiation or palliative care, all of which are more likely with endoscopic resection than esophagectomy, the researchers noted.
In sensitivity analyses, endoscopic treatment optimized quality of life in T1b EAC patients who were older than 80 years and had a comorbidity index of 1 or 2. Beyond that, treatment choice depended on posttreatment variables. “[If] a patient considered his or her quality of life postesophagectomy nearly equal to, or preferable to, [that] postendoscopic treatment, esophagectomy would be the optimal treatment strategy,” the investigators wrote. “An example would be the patient who would rather have an esophagectomy than worry about recurrence with endoscopic treatment.”
Pathologic analysis of T1a EACs can be inconsistent, and the model did not test whether high versus low pathologic risk affected treatment preference, the researchers said. They added data on T1NOS (T1 not otherwise specified) EACs to the model because the SEER-Medicare database included so few T1b endoscopic cases, but T1NOS patients had the worst outcomes and were in fact probably higher stage than T1. Fully 31% of endoscopy patients were T1NOS, compared with only 11% of esophagectomy patients, which would have biased the model against endoscopic treatment, according to the investigators.
The National Institutes of Health provided funding. Dr. Chu reported having no conflicts of interest. Three coinvestigators disclosed ties to CSA Medical, Ninepoint, C2 Therapeutics, Medtronic, and Trio Medicines. The remaining coinvestigators had no conflicts.
SOURCE: Chu JN et al. Clin Gastroenterol Hepatol. 2017 Nov 24. doi: 10.1016/j.cgh.2017.10.024.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: A Markov model supports endoscopic resection for some T1b esophageal adenocarcinomas.
Major finding: Endoscopic resection was preferred in T1b patients who were older than 80 years and had a Charlson comorbidity index of 1 or 2.
Data source: A Markov model with Surveillance, Epidemiology, and End Results (SEER) Medicare mortality data and published cost data converted to 2017 U.S. dollars based on the national Consumer Price Index.
Disclosures: The National Institutes of Health provided funding. Dr. Chu reported having no conflicts of interest. Three coinvestigators disclosed ties to CSA Medical, Ninepoint, C2 Therapeutics, Medtronic, and Trio Medicines. The remaining coinvestigators had no conflicts.
Source: Chu JN et al. Clin Gastroenterol Hepatol. 2017 Nov 24. doi: 10.1016/j.cgh.2017.10.024.
Sofosbuvir/ledipasvir looks good in HBV coinfected patients
For patients coinfected with hepatitis B and C viruses, 12 weeks of sofosbuvir/ledipasvir therapy produced a 100% rate of sustained virologic response, with no cases of liver failure or death, in a phase 3b, multicenter, open-label study.
“Although we observed increases in HBV DNA in most patients, these increases were [usually] not associated with ALT [alanine aminotransferase] flares or clinical complications,” reported Chun-Jen Liu, MD, of National Taiwan University College of Medicine and Hospital, Taipei, and his associates. Although nearly two-thirds of patients developed HBV reactivation, less than 5% developed alanine aminotransferase rises of at least twice the upper limit of normal, and only one patient had symptomatic HBV reactivation, which resolved with entecavir therapy. This study was the first to prospectively evaluate the risk of HBV reactivation during HCV treatment, the researchers wrote in the March issue of Gastroenterology.
Because chronic hepatitis C virus infection tends to suppress HBV replication, peginterferon/ribavirin or direct-acting anti-HCV treatment can reactivate HBV infection, especially in patients who test positive for hepatitis B surface antigen (HBsAg). Left untreated, reactivated HBV can lead to fulminant hepatitis, liver failure, and death, as noted on recently mandated boxed warnings.
Accordingly, guidelines recommend testing patients for HBV infection before starting HCV treatment. The study enrolled 111 coinfected patients; about two-thirds were female, and 16% had compensated cirrhosis. All tested positive for HBsAg at screening, and all but one also tested positive at baseline. The mean baseline HBV DNA level was 2.1 log10 IU/mL. Patients received 90 mg ledipasvir plus 400 mg sofosbuvir for 12 weeks, and levels of HCV RNA, HBV DNA, and HBsAg were measured at treatment weeks 1, 2, 4, 8, and 12; at posttreatment week 4; and then every 12 weeks through posttreatment week 108.
In all, 70 (63%) patients developed HBV reactivation, including 84% of the 37 patients with undetectable HBV DNA at baseline. During treatment, none of these patients had an ALT rise of more than twice the upper limit of normal. By 48 weeks after treatment, however, 77% still had quantifiable HBV DNA, and two had marked ALT rises. Furthermore, by posttreatment week 53, one of these patients developed bilirubinemia and symptomatic HBV infection (malaise, anorexia, scleral jaundice, and nausea), which resolved after treatment with entecavir.
A total of 74 patients had quantifiable baseline HBV DNA (at least 20 IU/mL). Three received entecavir or tenofovir disoproxil fumarate based on confirmed HBV reactivation with a concomitant ALT rise of at least twice the upper limit of normal. All were asymptomatic. There were no cases of liver failure or death.
“Regardless of HBV DNA and/or ALT elevations, no patient had signs of liver failure,” the researchers wrote. “Our results support the recommendations put forth in clinical treatment guidelines: HCV-infected patients should be evaluated for HBV infection prior to HCV treatment with direct-acting antivirals. Those who are HBsAg positive should be monitored during and after treatment for HBV reactivation, and treatment should be initiated in accordance with existing guidelines.”
Gilead funded the study. Dr. Liu and 12 coinvestigators reported having no conflicts of interest. Nine coinvestigators reported being employees and shareholders of Gilead, and one coinvestigator reported consulting for Gilead. The senior author disclosed ties to Roche, Bristol-Myers Squibb, Johnson & Johnson, Bayer, MSD, and Taiha.
SOURCE: Liu C-J et al. Gastroenterology. 2017 Nov 21. doi: 10.1053/j.gastro.2017.11.011.
FROM GASTROENTEROLOGY
Key clinical point: Combination therapy with sofosbuvir/ledipasvir effectively treated chronic hepatitis C infection in hepatitis B coinfected patients.
Major finding: The rate of sustained virologic response was 100% at 12 weeks. Most patients (63%) had an increase in hepatitis B viral DNA, but less than 5% had a concomitant increase in alanine aminotransferase. There were no cases of liver failure or death.
Data source: A phase 3b, multicenter, single-arm, open-label study of 111 coinfected patients.
Disclosures: Gilead funded the study. Dr. Liu and 12 coinvestigators reported having no conflicts of interest. Nine coinvestigators reported being employees and shareholders of Gilead, and one coinvestigator reported consulting for Gilead. The senior author disclosed ties to Roche, Bristol-Myers Squibb, Johnson & Johnson, Bayer, MSD, and Taiha.
Source: Liu C-J et al. Gastroenterology. 2017 Nov 21. doi: 10.1053/j.gastro.2017.11.011.
Ulcerative colitis is disabling over time
In a meta-analysis of 17 population-based cohorts spanning 1935 to 2016, between 70% and 80% of patients with ulcerative colitis relapsed within 10 years of diagnosis, and 10%-15% had aggressive disease.
However, “contemporary population-based cohorts of patients diagnosed in the biologic era are lacking,” [and they] “may inform us of the population-level impact of paradigm shifts in approach to ulcerative colitis management during the last decade, such as early use of disease-modifying biologic therapy and treat-to-target [strategies],” wrote Mathurin Fumery, MD, of the University of California San Diego, La Jolla. The report was published in Clinical Gastroenterology and Hepatology (2017 Jun 16. doi: 10.1016/j.cgh.2017.06.016).
Population-based observational cohort studies follow an entire group in a geographic area over an extended time, which better characterizes the true natural history of disease outside highly controlled settings of clinical trials, the reviewers noted. They searched MEDLINE for population-based longitudinal studies of adults with newly diagnosed ulcerative colitis, whose medical records were reviewed, and who were followed for at least a year. They identified 60 such studies of 17 cohorts that included 15,316 patients in southern and northern Europe, Australia, Israel, the United States, Canada, China, Hong Kong, Indonesia, Sri Lanka, Macau, Malaysia, Singapore, and Thailand.
Left-sided colitis was most common (median, 40%; interquartile range, 33%-45%) and about 10%-30% of patients had disease extension. Patients tended to have mild to moderate disease that was most active at diagnosis and subsequently alternated between remission and mild activity. However, nearly half of patients were hospitalized at some point because of ulcerative colitis, and about half of that subgroup was rehospitalized within 5 years. Furthermore, up to 15% of patients with ulcerative colitis underwent colectomy within 10 years, a risk that mucosal healing helped mitigate. Use of corticosteroids dropped over time as the prevalence of immunomodulators and anti–tumor necrosis factor therapy rose.
“Although ulcerative colitis is not associated with an increased risk of mortality, it is associated with high morbidity and work disability, comparable to Crohn’s disease,” the reviewers concluded. Not only are contemporary population-level data lacking, but it also remains unclear whether treating patients with ulcerative colitis according to baseline risk affects the disease course, or whether the natural history of this disease differs in newly industrialized nations or the Asia-Oceania region, they added.
Dr. Fumery disclosed support from the French Society of Gastroenterology, AbbVie, MSD, Takeda, and Ferring. Coinvestigators disclosed ties to numerous pharmaceutical companies.
SOURCE: Fumery M et al. Clin Gastroenterol Hepatol. 2017 Jun 16. doi: 10.1016/j.cgh.2017.06.016.
Understanding the natural history of ulcerative colitis (UC) is imperative, especially in view of emerging therapies that could have the potential to alter the natural course of disease. Dr. Fumery and his colleagues are to be congratulated for conducting a comprehensive review of different inception cohorts across the world and evaluating different facets of the disease. They found that the majority of patients had a mild to moderate disease course, which was most active at the time of diagnosis. Approximately half the patients required UC-related hospitalization at some time during the course of their disease. Similarly, 50% of patients received corticosteroids, and while almost all patients with UC were treated with mesalamine within 1 year of diagnosis, 30%-40% were not on mesalamine long term. The reviewers also identified consistent predictors of poor prognosis, including young age at diagnosis, extensive disease, early need for corticosteroids, and elevated biochemical markers.
These results are reassuring because they reinforce the previous observations that roughly half the patients with UC have an uncomplicated disease course and that the first few years of disease are the most aggressive. A good indicator was that the proportion of patients receiving corticosteroids decreased over time. The disheartening news was that the long-term colectomy rates have generally remained stable over time.
Nabeel Khan, MD, is assistant professor of clinical medicine, University of Pennsylvania, Philadelphia, and director of gastroenterology, Philadelphia Veterans Affairs Medical Center. He has received research grants from Takeda, Luitpold, and Pfizer.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: Although usually mild to moderate in severity, ulcerative colitis is disabling over time.
Major finding: Cumulative risk of relapse was 70%-80% at 10 years.
Data source: A systematic review and analysis of 17 population-based cohorts.
Disclosures: Dr. Fumery disclosed support from the French Society of Gastroenterology, AbbVie, MSD, Takeda, and Ferring. Coinvestigators disclosed ties to numerous pharmaceutical companies.
Source: Fumery M et al. Clin Gastroenterol Hepatol. 2017 Jun 16. doi: 10.1016/j.cgh.2017.06.016.