
PPI use not linked to cognitive decline

Article Type
Changed
Fri, 01/18/2019 - 17:32

 

Use of proton pump inhibitors (PPIs) is not associated with cognitive decline in two prospective, population-based studies of identical twins published in the May issue of Clinical Gastroenterology and Hepatology.

“No stated differences in [mean cognitive] scores between PPI users and nonusers were significant,” wrote Mette Wod, PhD, of the University of Southern Denmark, Odense, with her associates.


Past research has yielded mixed findings about whether using PPIs affects the risk of dementia. Preclinical data suggest that exposure to these drugs affects amyloid levels in mice, but “the evidence is equivocal, [and] the results of epidemiologic studies [of humans] have also been inconclusive, with more recent studies pointing toward a null association,” the investigators wrote. Furthermore, there are only “scant” data on whether long-term PPI use affects cognitive function, they noted.

To help clarify the issue, they analyzed prospective data from two studies of twins in Denmark: the Study of Middle-Aged Danish Twins, in which individuals underwent a five-part cognitive battery at baseline and then 10 years later, and the Longitudinal Study of Aging Danish Twins, in which participants underwent the same test at baseline and 2 years later. The cognitive test assessed verbal fluency, forward and backward digit span, and immediate and delayed recall of a 12-item list. Using data from a national prescription registry, the investigators also estimated individuals’ PPI exposure starting 2 years before study enrollment.

In the study of middle-aged twins, participants who used high-dose PPIs before study enrollment had cognitive scores that were slightly lower at baseline, compared with PPI nonusers. Mean baseline scores were 43.1 (standard deviation, 13.1) and 46.8 (SD, 10.2), respectively. However, after researchers adjusted for numerous clinical and demographic variables, the between-group difference in baseline scores narrowed to just 0.69 (95% confidence interval, –4.98 to 3.61), which was not statistically significant.

The longitudinal study of older twins yielded similar results. Individuals who used high doses of PPIs had a slightly higher adjusted mean baseline cognitive score than did nonusers, but the difference did not reach statistical significance (0.95; 95% CI, –1.88 to 3.79).

Furthermore, prospective assessments of cognitive decline found no evidence of an effect. In the longitudinal aging study, high-dose PPI users had slightly less cognitive decline (based on a smaller change in test scores over time) than did nonusers, but the adjusted difference in decline between groups was not significant (1.22 points; 95% CI, –3.73 to 1.29). In the middle-aged twin study, individuals with the highest levels of PPI exposure (at least 1,600 daily doses) had slightly less cognitive decline than did nonusers, with an adjusted difference of 0.94 points (95% CI, –1.63 to 3.50) between groups, but this did not reach statistical significance.

“This study is the first to examine the association between long-term PPI use and cognitive decline in a population-based setting,” the researchers concluded. “Cognitive scores of more than 7,800 middle-aged and older Danish twins at baseline did not indicate an association with previous PPI use. Follow-up data on more than 4,000 of these twins did not indicate that use of this class of drugs was correlated to cognitive decline.”

Odense University Hospital provided partial funding. Dr. Wod had no disclosures. Three coinvestigators disclosed ties to AstraZeneca and Bayer AG.

SOURCE: Wod M et al. Clin Gastroenterol Hepatol. 2018 Feb 3. doi: 10.1016/j.cgh.2018.01.034.

Body

Over the last 20 years, multiple retrospective studies have shown associations between the use of proton pump inhibitors (PPIs) and a wide constellation of serious medical complications. However, detecting an association between a drug and a complication does not necessarily indicate that the drug was responsible.

Dr. Laura E. Targownik
The evidence supporting the assertion that PPIs cause cognitive decline is among the most tenuous of all the PPI/complication associations. The initial reports linking PPI use to dementia emerged in 2016 from a German retrospective analysis, which showed an association between PPI use and having a health care contact coded as dementia. However, that study had numerous methodological flaws: the investigators did not use a validated definition of dementia and could not control for conditions that may be more common in both PPI users and persons with dementia. In addition, there is little reason to believe, based on their mechanism of action, that PPIs should have any negative effect on cognitive function. Nevertheless, this paper was extensively cited in the lay press and likely led to the inappropriate discontinuation of PPI therapy among persons with ongoing indications, or to the failure to start PPI therapy in persons who would have derived benefit.

This well-done study by Wod et al., which shows no significant association between PPI use and either decreased cognition or cognitive decline, will, I hope, serve to allay any misplaced concerns that may exist among clinicians and patients about PPI use in this population. The paper has notable strengths, most importantly access to a direct, unbiased assessment of changes in cognitive function over time and an accurate assessment of PPI exposure. Short of a controlled, prospective trial, we are unlikely to see better evidence indicating a lack of a causal relationship between PPI use and changes in cognitive function. This provides assurance that patients with indications for PPI use can continue to use them.

Laura E. Targownik, MD, MSHS, FRCPC, is section head, section of gastroenterology, University of Manitoba, Winnipeg, Canada; Gastroenterology and Endoscopy Site Lead, Health Sciences Centre, Winnipeg; associate director, University of Manitoba Inflammatory Bowel Disease Research Centre; associate professor, department of internal medicine, section of gastroenterology, University of Manitoba. She has no conflicts of interest.




FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

Vitals

 

Key clinical point: Use of proton pump inhibitors was not associated with cognitive decline.

Major finding: Mean baseline cognitive scores did not significantly differ between PPI users and nonusers, nor did changes in cognitive scores over time.

Study details: Two population-based studies of twins in Denmark.

Disclosures: Odense University Hospital provided partial funding. Dr. Wod had no disclosures. Three coinvestigators disclosed ties to AstraZeneca and Bayer AG.

Source: Wod M et al. Clin Gastroenterol Hepatol. 2018 Feb 3. doi: 10.1016/j.cgh.2018.01.034.


Alpha fetoprotein boosted detection of early-stage liver cancer

Article Type
Changed
Wed, 05/26/2021 - 13:50

 

For patients with cirrhosis, adding serum alpha fetoprotein testing to ultrasound significantly boosted its ability to detect early-stage hepatocellular carcinoma, according to the results of a systematic review and meta-analysis reported in the May issue of Gastroenterology.

Used alone, ultrasound detected only 45% of early-stage hepatocellular carcinomas (95% confidence interval, 30%-62%), reported Kristina Tzartzeva, MD, of the University of Texas, Dallas, with her associates. Adding alpha fetoprotein (AFP) increased this sensitivity to 63% (95% CI, 48%-75%; P = .002). Few studies evaluated alternative surveillance tools, such as CT or MRI.

Diagnosing liver cancer early is key to survival and thus is a central issue in cirrhosis management. However, the best surveillance strategy remains uncertain, hinging as it does on sensitivity, specificity, and cost. The American Association for the Study of Liver Diseases and the European Association for the Study of the Liver recommend that cirrhotic patients undergo twice-yearly ultrasound to screen for hepatocellular carcinoma (HCC), but they disagree about the value of adding serum biomarker AFP testing. Meanwhile, more and more clinics are using CT and MRI because of concerns about the unreliability of ultrasound. “Given few direct comparative studies, we are forced to primarily rely on indirect comparisons across studies,” the reviewers wrote.

To do so, they searched MEDLINE and Scopus and identified 32 studies of HCC surveillance that comprised 13,367 patients, nearly all with baseline cirrhosis. The studies were published from 1990 to August 2016.

Ultrasound detected HCC of any stage with a sensitivity of 84% (95% CI, 76%-92%), but its sensitivity for detecting early-stage disease was less than 50%. In studies that performed direct comparisons, ultrasound alone was significantly less sensitive than ultrasound plus AFP for detecting all stages of HCC (relative risk, 0.80; 95% CI, 0.72-0.88) and early-stage disease (0.78; 0.66-0.92). However, ultrasound alone was more specific than ultrasound plus AFP (RR, 1.08; 95% CI, 1.05-1.09).

Four studies of about 900 patients evaluated cross-sectional imaging with CT or MRI. In one single-center, randomized trial, CT had a sensitivity of 63% for detecting early-stage disease, but the 95% CI for this estimate was very wide (30%-87%) and CT did not significantly outperform ultrasound (Aliment Pharmacol Ther. 2013;38:303-12). In another study, MRI and ultrasound had significantly different sensitivities of 84% and 26% for detecting (usually) early-stage disease (JAMA Oncol. 2017;3[4]:456-63).


“Ultrasound currently forms the backbone of professional society recommendations for HCC surveillance; however, our meta-analysis highlights its suboptimal sensitivity for detection of hepatocellular carcinoma at an early stage. Using ultrasound in combination with AFP appears to significantly improve sensitivity for detecting early HCC with a small, albeit statistically significant, trade-off in specificity. There are currently insufficient data to support routine use of CT- or MRI-based surveillance in all patients with cirrhosis,” the reviewers concluded.

The National Cancer Institute and Cancer Prevention Research Institute of Texas provided funding. None of the reviewers had conflicts of interest.

SOURCE: Tzartzeva K et al. Gastroenterology. 2018 Feb 6. doi: 10.1053/j.gastro.2018.01.064.




FROM GASTROENTEROLOGY

Vitals

 

Key clinical point: Ultrasound alone unreliably detects early-stage hepatocellular carcinoma, but adding alpha fetoprotein increases its sensitivity.

Major finding: Used alone, ultrasound detected only 45% of early-stage cases. Adding alpha fetoprotein increased this sensitivity to 63% (P = .002).

Study details: Systematic review and meta-analysis of 32 studies comprising 13,367 patients and spanning from 1990 to August 2016.

Disclosures: The National Cancer Institute and Cancer Prevention Research Institute of Texas provided funding. None of the researchers had conflicts of interest.

Source: Tzartzeva K et al. Gastroenterology. 2018 Feb 6. doi: 10.1053/j.gastro.2018.01.064.


One in seven Americans had fecal incontinence

An important step forward
Article Type
Changed
Fri, 01/18/2019 - 17:32

 

One in seven respondents to a national survey reported a history of fecal incontinence, including one-third within the preceding week, investigators reported.

“Fecal incontinence [FI] is age-related and more prevalent among individuals with inflammatory bowel disease, celiac disease, irritable bowel syndrome, or diabetes than people without these disorders. Proactive screening for FI among these groups is warranted,” Stacy B. Menees, MD, and her associates wrote in the May issue of Gastroenterology (doi: 10.1053/j.gastro.2018.01.062).

Accurately determining the prevalence of FI is difficult because patients are reluctant to disclose symptoms and physicians often do not ask. In one study of HMO enrollees, about a third of patients had a history of FI but fewer than 3% had a medical diagnosis. In other studies, the prevalence of FI has ranged from 2% to 21%. Population aging fuels the need to narrow these estimates because FI becomes more common with age, the investigators noted.

Accordingly, in October 2015, they used a mobile app called MyGIHealth to survey nearly 72,000 individuals about fecal incontinence and other GI symptoms. The survey took about 15 minutes to complete, in return for which respondents could receive cash, shop online, or donate to charity. The investigators assessed FI severity by analyzing responses to the National Institutes of Health FI Patient Reported Outcomes Measurement Information System questionnaire.

Of the 10,033 respondents reporting a history of fecal incontinence (14.4%), 33.3% had experienced at least one episode in the past week. About a third of individuals with FI said it interfered with their daily activities. “Increasing age and concomitant diarrhea and constipation were associated with increased odds [of] FI,” the researchers wrote. Compared with individuals aged 18-24 years, the odds of having ever experienced FI rose by 29% among those aged 25-44 years, by 72% among those aged 45-64 years, and by 118% among persons aged 65 years and older.
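The percent increases quoted above are odds ratios re-expressed relative to the youngest age group (an odds ratio of 1.29 corresponds to a 29% increase in odds). A minimal Python sketch of that conversion, using the reported figures; the helper name is mine, not from the paper:

```python
# Convert "percent increase in odds" (as reported, relative to the
# 18- to 24-year-old reference group) back into odds ratios.
reported_increases_pct = [29, 72, 118]  # successively older age groups

def to_odds_ratio(pct_increase: float) -> float:
    """A 29% increase in odds corresponds to an odds ratio of 1.29."""
    return 1.0 + pct_increase / 100.0

odds_ratios = [round(to_odds_ratio(p), 2) for p in reported_increases_pct]
print(odds_ratios)  # [1.29, 1.72, 2.18]
```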

Self-reported FI also was significantly more common among individuals with Crohn’s disease (41%), ulcerative colitis (37%), celiac disease (34%), irritable bowel syndrome (13%), or diabetes (13%) than it was among persons without these conditions. Corresponding odds ratios ranged from about 1.5 (diabetes) to 2.8 (celiac disease).

For individuals reporting FI within the past week, greater severity (based on their responses to the NIH FI Patient Reported Outcomes Measurement Information System questionnaire) significantly correlated with being non-Hispanic black (P = .03) or Latino (P = .02) and with having Crohn’s disease (P less than .001), celiac disease (P less than .001), diabetes (P = .04), human immunodeficiency virus infection (P = .001), or chronic idiopathic constipation (P less than .001). “Our study is the first to find differences among racial/ethnic groups regarding FI severity,” the researchers noted. They did not speculate on reasons for the finding, but stressed the importance of screening for FI and screening patients with FI for serious GI diseases.

Ironwood Pharmaceuticals funded the National GI Survey, but the investigators received no funding for this study. Three coinvestigators reported ties to Ironwood Pharmaceuticals and My Total Health.

SOURCE: Menees SB et al. Gastroenterology. 2018 Feb 3. doi: 10.1053/j.gastro.2018.01.062.


 

Fecal incontinence (FI) is a common problem associated with significant social anxiety and decreased quality of life for patients who experience it. Unfortunately, patients are not always forthcoming regarding their symptoms, and physicians often fail to inquire directly about incontinence symptoms.

Previous studies have shown the prevalence of FI to vary widely across different populations. Using novel technology through a mobile app, researchers at the University of Michigan, Ann Arbor, and Cedars-Sinai Medical Center, Los Angeles, have been able to perform the largest population-based study of community-dwelling Americans. They confirmed that FI is indeed a common problem experienced across the spectrum of age, sex, race, and socioeconomic status and interferes with the daily activities of more than one-third of those who experience it.

This study supports previous findings of an age-related increase in FI, with the highest prevalence in patients over age 65 years. Interestingly, males were more likely than females to have experienced FI within the past week, but not more likely to have ever experienced FI. While FI is often thought of as a primarily female problem (related to past obstetrical injury), it is important to remember that it likely affects both sexes equally.

Other significant risk factors include diabetes and gastrointestinal disorders. This study also confirms prior population-based findings that patients with chronic constipation are more likely to suffer FI. Finally, this study also identified risk factors associated with FI symptom severity including diabetes, HIV/AIDS, Crohn’s disease, celiac disease, and chronic constipation. This is also the first study to show differences between racial/ethnic groups, suggesting higher FI symptom scores in Latinos and African-Americans.

The strengths of this study include its size and the anonymity an internet-based survey provides for a potentially embarrassing topic; however, that format may also have excluded older individuals or those without regular internet access.

In summary, I believe this is an important study that confirms FI is common among Americans while helping to identify potential risk factors for the presence and severity of FI. I am hopeful that with increased awareness, health care providers will become more diligent in screening their patients for FI, particularly in these higher-risk populations.
 

Stephanie A. McAbee, MD, is an assistant professor of medicine in the division of gastroenterology, hepatology, and nutrition at Vanderbilt University Medical Center, Nashville, Tenn. She has no conflicts of interest.

 

Article Source

FROM GASTROENTEROLOGY

Vitals

 

Key clinical point: One in seven (14%) individuals had experienced fecal incontinence (FI), one-third within the past week.

Major finding: Self-reported FI was significantly more common among individuals with Crohn’s disease (41%), ulcerative colitis (37%), celiac disease (34%), irritable bowel syndrome (13%), or diabetes (13%) than among individuals without these diagnoses.

Study details: Analysis of 71,812 responses to the National GI Survey, conducted in October 2015.

Disclosures: Although Ironwood Pharmaceuticals funded the National GI Survey, the investigators received no funding for this study. Three coinvestigators reported ties to Ironwood Pharmaceuticals and My Total Health.

Source: Menees SB et al. Gastroenterology. 2018 Feb 3. doi: 10.1053/j.gastro.2018.01.062.


Heavy drinking did not worsen clinical outcomes from drug-induced liver injury


 

Heavy drinking was not associated with higher proportions of liver-related deaths or liver transplantation among patients with drug-induced liver injury (DILI), according to the results of a prospective multicenter cohort study reported in the May issue of Clinical Gastroenterology and Hepatology.

Anabolic steroids were the most common cause of DILI among heavy drinkers, defined as men who averaged more than three drinks a day or women who averaged more than two drinks daily, said Lara Dakhoul, MD, of Indiana University, Indianapolis, and her associates. There also was no evidence that heavy alcohol consumption increased the risk of liver injury attributable to isoniazid exposure, the researchers wrote.
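The study's definition of heavy drinking reduces to a simple sex-specific threshold. As a minimal sketch (the function name and signature are mine, not from the paper):

```python
def is_heavy_drinker(avg_drinks_per_day: float, male: bool) -> bool:
    """Heavy drinking as defined in the study: men averaging more than
    three drinks per day, women averaging more than two."""
    threshold = 3.0 if male else 2.0
    return avg_drinks_per_day > threshold

print(is_heavy_drinker(2.5, male=True))   # False: at or under the male threshold
print(is_heavy_drinker(2.5, male=False))  # True: over the female threshold
```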

Although consuming alcohol significantly increases the risk of acetaminophen-induced liver injury, there is much less clarity about the relationship between drinking and hepatotoxicity from drugs such as duloxetine or antituberculosis medications, the researchers noted. In fact, one recent study found that drinking led to less severe liver injury among individuals with DILI. To better elucidate these links, the investigators studied 1,198 individuals with confirmed or probable DILI who enrolled in the DILI Network study (DILIN) between 2004 and 2016. At enrollment, all participants were asked if they consumed alcohol, and those who reported drinking within the past 12 months were offered a shortened version of the Skinner Alcohol Dependence Scale to collect details on alcohol consumption, including type, amount, and frequency.

In all, 601 persons reported consuming at least one alcoholic drink in the preceding year, of whom 348 completed the Skinner questionnaire. A total of 80 individuals reported heavy alcohol consumption. Heavy drinkers were typically in their early 40s, while nondrinkers tended to be nearly 50 years old (P less than .01). Heavy drinkers were also more often men (63%) while nondrinkers were usually women (65%; P less than .01). Heavy drinkers were significantly more likely to have DILI secondary to anabolic steroid exposure (13%) than were nondrinkers (2%; P less than .001). However, latency, pattern of liver injury, peak enzyme levels, and patterns of recovery from steroid hepatotoxicity were similar regardless of alcohol history.

A total of eight patients with DILI died of liver-related causes or underwent liver transplantation, and proportions of patients with these outcomes were similar regardless of alcohol history. These eight patients had no evidence of hepatitis C virus infection, but three appeared to have underlying alcoholic liver disease with superimposed acute-on-chronic liver failure. Heavy drinkers did not have significantly higher DILI severity scores than nondrinkers, but they did have significantly higher peak serum levels of alanine aminotransferase (1,323 U/L vs. 754, respectively; P = .02) and significantly higher levels of bilirubin (16.1 vs. 12.7 mg/dL; P = .03).

The two fatal cases of DILI among heavy drinkers involved a 44-year-old man with underlying alcoholic cirrhosis and steatohepatitis who developed acute-on-chronic liver failure 11 days after starting niacin, and a 76-year-old man with chronic obstructive pulmonary disease and bronchitis flare who developed severe liver injury and skin rash 6 days after starting azithromycin.

The study was not able to assess whether heavy alcohol consumption contributed to liver injury from specific agents, the researchers said. Additionally, a substantial number of drinkers did not complete the Skinner questionnaire, and those who did might have underestimated or underreported their own alcohol consumption. “Counterbalancing these issues are the [study’s] unique strengths, such as prospective design, larger sample size, well-characterized DILI phenotype, and careful, structured adjudication of causality and severity,” the researchers wrote.

 

 


Funders included the National Institute of Diabetes and Digestive and Kidney Diseases and the National Cancer Institute. Dr. Dakhoul had no conflicts of interest. One coinvestigator disclosed ties to numerous pharmaceutical companies.

SOURCE: Dakhoul L et al. Clin Gastroenterol Hepatol. 2018 Jan 3. doi: 10.1016/j.cgh.2017.12.036.


 

Article Source

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

Vitals

 

Key clinical point: Heavy alcohol consumption was not associated with worse outcomes of drug-induced liver toxicity.

Major finding: Proportions of patients with liver-related deaths and liver transplantation were statistically similar regardless of alcohol consumption history (P = .18).

Study details: Prospective study of 1,198 individuals with probable drug-induced liver injury.

Disclosures: Funders included the National Institute of Diabetes and Digestive and Kidney Diseases and the National Cancer Institute. Dr. Dakhoul had no conflicts. One coinvestigator disclosed ties to numerous pharmaceutical companies.

Source: Dakhoul L et al. Clin Gastroenterol Hepatol. 2018 Jan 3. doi: 10.1016/j.cgh.2017.12.036.


Model predicted Barrett’s esophagus progression

Comment by Dr. Prateek Sharma on Barrett’s esophagus (BE)
Article Type
Changed
Sat, 12/08/2018 - 14:51

 

A scoring model encompassing just four traits accurately predicted which patients with Barrett’s esophagus were most likely to develop high-grade dysplasia or esophageal adenocarcinoma, researchers reported in the April issue of Gastroenterology (2017 Dec 19. doi: 10.1053/j.gastro.2017.12.009).

Those risk factors included sex, smoking, length of Barrett’s esophagus, and the presence of baseline low-grade dysplasia, said Sravanthi Parasa, MD, of Swedish Medical Center, Seattle, and her associates. For example, a male with a history of smoking found to have a 5-cm, nondysplastic Barrett’s esophagus on histology during his index endoscopy would fall into the model’s intermediate risk category, with a 0.7% annual risk of progression to high-grade dysplasia or esophageal adenocarcinoma, they explained. “This model has the potential to complement molecular biomarker panels currently in development,” they wrote.
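
To make the score-to-category structure concrete, here is a minimal Python sketch; the point weights and tier boundaries below are invented for illustration and are not the published model's values.

```python
# Hypothetical sketch of a points-based risk score for Barrett's esophagus
# progression. All weights and tier boundaries are invented for illustration;
# they are NOT the values from the published model.

def risk_points(male, smoker, be_length_cm, low_grade_dysplasia):
    points = 0
    points += 9 if male else 0                   # hypothetical weight
    points += 5 if smoker else 0                 # hypothetical weight
    points += round(be_length_cm)                # hypothetical: 1 point per cm
    points += 11 if low_grade_dysplasia else 0   # hypothetical weight
    return points

def risk_category(points):
    # Hypothetical tier boundaries.
    if points < 10:
        return "low"
    if points <= 20:
        return "intermediate"
    return "high"

# The article's example patient: male smoker, 5-cm nondysplastic segment.
pts = risk_points(male=True, smoker=True, be_length_cm=5.0,
                  low_grade_dysplasia=False)
print(pts, risk_category(pts))  # 19 intermediate
```

Under these invented weights, the article's example patient lands in the intermediate tier, mirroring the categorization described in the text.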

Barrett’s esophagus increases the risk of esophageal adenocarcinoma by anywhere from 30 to 125 times, a range that reflects the multifactorial nature of progression and the hypothesis that not all patients with Barrett’s esophagus should undergo the same frequency of endoscopic surveillance, said the researchers. To incorporate predictors of progression into a single model, they analyzed prospective data from nearly 3,000 patients with Barrett’s esophagus who were followed for a median of 6 years at five centers in the United States and one center in the Netherlands. At baseline, patients were an average of 55 years old (standard deviation, 20 years), 84% were men, 88% were white, and the average Barrett’s esophagus length was 3.7 cm (SD, 3.2 cm).

The researchers created the model by starting with many demographic and clinical candidate variables and then using backward selection to eliminate those that did not predict progression with a P value of .05 or less. This is the same method used in the Framingham Heart Study, they noted. In all, 154 (6%) patients with Barrett’s esophagus developed high-grade dysplasia or esophageal adenocarcinoma, an annual progression rate of about 1%. The significant predictors of progression included male sex, smoking, length of Barrett’s esophagus, and low-grade dysplasia at baseline. A model that included only these four variables distinguished progressors from nonprogressors with a c statistic of 0.76 (95% confidence interval, 0.72 to 0.80; P less than .001). Using 30% of patients as an internal validation cohort, the model’s calibration slope was 0.99 and its calibration intercept was -0.09 (perfectly calibrated models have a slope of 1.0 and an intercept of 0.0).
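
The c statistic is the probability that a randomly chosen progressor receives a higher predicted risk than a randomly chosen nonprogressor. A short Python sketch with made-up risk scores (not study data) shows the computation:

```python
# Compute a c statistic (concordance index) from predicted risks.
# The scores below are made up for illustration; they are not study data.

def c_statistic(event_scores, nonevent_scores):
    """Fraction of (event, nonevent) pairs ranked correctly; ties count half."""
    concordant = 0.0
    pairs = 0
    for e in event_scores:
        for n in nonevent_scores:
            pairs += 1
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / pairs

progressors = [0.9, 0.8, 0.7, 0.6]          # hypothetical predicted risks
nonprogressors = [0.1, 0.2, 0.3, 0.4, 0.7]  # hypothetical predicted risks

print(c_statistic(progressors, nonprogressors))  # 0.925
```

A c statistic of 0.5 is chance-level discrimination; the reported 0.76 means the model ranks a true progressor above a nonprogressor roughly three times out of four.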

 

Therefore, the model was well calibrated and did an appropriate job of identifying risk groups, the investigators concluded. Considering that the overall risk of Barrett’s esophagus progression is low, using this model could help avoid excess costs and burdens of unnecessary surveillance, they added. “We recognize that there is a key interest in contemporary medical research whether a marker (e.g. molecular, genetic) could add to incremental value of a risk progression score,” they wrote. “This can be an area of future research.”

There were no funding sources. Dr. Parasa had no disclosures. One coinvestigator disclosed ties to Cook Medical, CDx Diagnostics, and Cosmo Pharmaceuticals.

SOURCE: Parasa S et al. Gastroenterology. 2017 Dec 19. doi: 10.1053/j.gastro.2017.12.009.

Body

Barrett’s esophagus (BE) is the only known precursor lesion to esophageal adenocarcinoma (EAC), a rapidly rising cancer in the Western world, which has a poor 5-year survival rate of less than 20%. Management strategies to affect EAC incidence include screening and surveillance, with current guidelines recommending surveillance for all patients with a diagnosis of BE.
However, there are several challenges associated with adopting BE surveillance for all patients: It is estimated that anywhere from 2 million to 5 million U.S. adults may harbor BE, and the overall risk of BE progression to EAC is low (approximately 0.2%-0.4% annually). Both of these factors influence the cost-effectiveness of a global BE surveillance program.
Hence, a risk-stratification score that can distinguish BE patients who are at high risk for progression to high-grade dysplasia (HGD) and/or EAC from those whose disease will not progress will be extremely useful. This concept would be similar to other risk-scoring mechanisms, such as the MELD score for progression in liver disease.

Dr. Prateek Sharma

With use of a large multicenter cohort of patients with BE (more than 4,500 patients), this is the first risk-prediction score developed and validated using baseline demographic and endoscopy information to determine risk of progression. Readily available factors such as patient sex, smoking status, BE length, and confirmed histology were identified as risk factors for progression, which could then generate a score determining the individual patient’s risk of progression. Such a simple scoring system has the potential of tailoring management based on the risk factors. In the future, inclusion of molecular biomarkers along with this score may further enhance its potential for personalized medicine in BE patients.
Prateek Sharma, MD, is a professor of medicine at the University of Kansas, Kansas City. He has no conflicts of interest.




Article Source

FROM GASTROENTEROLOGY

Vitals

 

Key clinical point: A model containing four risk factors identified patients with Barrett’s esophagus at significantly increased risk of progression to high-grade dysplasia or esophageal adenocarcinoma.

Major finding: The assigned scores identified patients with BE who progressed to HGD or EAC with a c statistic of 0.76 (95% CI, 0.72 to 0.80; P less than .001).

Data source: A multicenter, longitudinal study of 2,697 patients with Barrett’s esophagus.

Disclosures: There were no funding sources. Dr. Parasa had no disclosures. One coinvestigator disclosed ties to Cook Medical, CDx Diagnostics, and Cosmo Pharmaceuticals.

Source: Parasa S et al. Gastroenterology. 2017 Dec 19. doi: 10.1053/j.gastro.2017.12.009.


VIDEO: Biomarker accurately predicted primary nonfunction after liver transplant

Article Type
Changed
Wed, 01/02/2019 - 10:06

 

Increased donor liver perfusate levels of an underglycosylated glycoprotein predicted primary transplant nonfunction with 100% accuracy in two prospective cohorts, researchers reported in Gastroenterology.

SOURCE: AMERICAN GASTROENTEROLOGICAL ASSOCIATION

Glycomic alterations of immunoglobulin G “represent inflammatory disturbances in the liver that [mean it] will fail after transplantation,” wrote Xavier Verhelst, MD, of Ghent (Belgium) University Hospital, and his associates. The new glycomarker “could be a tool to safely select high-risk organs for liver transplantation that otherwise would be discarded from the donor pool based on a conventional clinical assessment,” and also could help prevent engraftment failures. “To our knowledge, not a single biomarker has demonstrated the same accuracy today,” they wrote in the April issue of Gastroenterology.

Chronic shortages of donor livers contribute to morbidity and death worldwide. However, relaxing donor criteria is controversial because of the increased risk of primary nonfunction, which affects some 2%-10% of liver transplantation patients, and early allograft dysfunction, which is even more common. Although no reliable scoring systems or biomarkers have been able to predict these outcomes prior to transplantation, clinical glycomics of serum has proven useful for diagnosing hepatic fibrosis, cirrhosis, and hepatocellular carcinoma, and for distinguishing hepatic steatosis from nonalcoholic steatohepatitis. “Perfusate biomarkers are an attractive alternative [to] liver biopsy or serum markers, because perfusate is believed to represent the condition of the entire liver parenchyma and is easy to collect in large volumes,” the researchers wrote.

Accordingly, they studied 66 patients who underwent liver transplantation at a single center in Belgium and a separate validation cohort of 56 transplantation recipients from two centers. The most common reason for liver transplantation was decompensated cirrhosis secondary to alcoholism, followed by chronic hepatitis C or B virus infection, acute liver failure, and polycystic liver disease. Donor grafts were transported using cold static storage (21° C), and hepatic veins were flushed to collect perfusate before transplantation. Protein-linked N-glycans were isolated from these perfusate samples and analyzed with a multicapillary electrophoresis-based ABI3130 sequencer.


The four patients in the primary study cohort who developed primary nonfunction resembled the others in terms of all clinical and demographic parameters except that they had a markedly increased concentration (P less than .0001) of a single glycan, agalacto core-alpha-1,6-fucosylated biantennary glycan, dubbed NGA2F. The single patient in the validation cohort who developed primary nonfunction also had a significantly increased concentration of NGA2F (P = .037). There were no false positives in either cohort, and a 13% cutoff for perfusate NGA2F level identified primary nonfunction with 100% accuracy, the researchers said. In a multivariable model of donor risk index and perfusate markers, only NGA2F was prognostic for developing primary nonfunction (P less than .0001).
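
As a sketch of how a single-marker cutoff functions as a classifier, the rule can be written in a few lines of Python; the 13% threshold is the one reported, but the perfusate values and outcomes below are invented for illustration:

```python
# Illustrative sketch (hypothetical data): applying the reported 13% perfusate
# NGA2F cutoff as a simple rule for predicting primary nonfunction (PNF), then
# checking its accuracy against a labeled set.

CUTOFF = 13.0  # % of total perfusate N-glycans, per the reported threshold

def predict_pnf(nga2f_percent):
    """Flag a graft as at risk of PNF if NGA2F exceeds the cutoff."""
    return nga2f_percent > CUTOFF

# Hypothetical (not the study's) perfusate NGA2F levels paired with outcomes.
cohort = [(8.2, False), (11.5, False), (14.9, True), (9.7, False), (16.3, True)]

correct = sum(predict_pnf(level) == outcome for level, outcome in cohort)
print(f"accuracy = {correct / len(cohort):.0%}")  # prints "accuracy = 100%"
```

On these invented values the rule classifies every graft correctly, mirroring the 100% accuracy the authors report for their cohorts.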

The researchers found no specific glycomic signature for early allograft dysfunction, perhaps because it is more complex and multifactorial, they wrote. Although electrophoresis testing took 48 hours, work is underway to shorten this to a “clinically acceptable time frame,” they added. They recommended multicenter studies to validate their findings.

Funders included the Research Fund – Flanders and Ghent University. The researchers reported having no conflicts of interest.

SOURCE: Verhelst X et al. Gastroenterology 2018 Jan 6. doi: 10.1053/j.gastro.2017.12.027.


 


 


Article Source

FROM GASTROENTEROLOGY

Vitals

 

Key clinical point: A glycomarker in donor liver perfusate was 100% accurate at predicting primary nonfunction after liver transplantation.

Major finding: In a multivariable model of donor risk index and perfusate markers, only the single glycan NGA2F was a significant predictor of primary nonfunction (P less than .0001).

Data source: A dual-center, prospective study of 66 liver transplant patients and a 56-member validation cohort.

Disclosures: Funders included the Research Fund – Flanders and Ghent University. The researchers reported having no conflicts of interest.

Source: Verhelst X et al. Gastroenterology 2018 Jan 6. doi: 10.1053/j.gastro.2017.12.027.


VIDEO: Pioglitazone benefited NASH patients with and without T2DM

Article Type
Changed
Tue, 05/03/2022 - 15:20

Pioglitazone therapy given for 18 months benefited patients with nonalcoholic steatohepatitis (NASH) similarly regardless of whether they had type 2 diabetes mellitus or prediabetes, according to the results of a randomized prospective trial.

 

Source: Bril F, et al. Clin Gastroenterol Hepatol. 2018 Feb 24. doi: 10.1016/j.cgh.2017.12.001.



Article Source

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

Vitals

 

Key clinical point: Pioglitazone improved liver measures in patients with nonalcoholic steatohepatitis whether or not they were diabetic.

Major finding: Nonalcoholic fatty liver disease activity score fell by at least 2 points, without worsening fibrosis, in 48% of T2DM patients and 46% of patients with prediabetes.

Data source: A prospective study of 101 patients with NASH, of whom 52 had type 2 diabetes and 49 had prediabetes.

Disclosures: The Burroughs Wellcome Fund, the American Diabetes Association, and a Veterans Affairs Merit Award supported the work. Senior author Kenneth Cusi, MD, disclosed nonfinancial support from Takeda Pharmaceuticals, grants from Novartis and Janssen Research and Development, and consulting relationships with Eli Lilly and Company, Tobira Therapeutics, and Pfizer. The other authors had no conflicts.

Source: Bril F et al. Clin Gastroenterol Hepatol. 2018 Feb 24. doi: 10.1016/j.cgh.2017.12.001.


AGA Clinical Practice Update: Incorporating psychological care in the management of chronic digestive diseases

Article Type
Changed
Fri, 01/18/2019 - 17:28

Psychogastroenterology is the science of applying psychological principles and techniques to alleviate the burden of chronic digestive diseases. This burden includes digestive symptoms and disease severity, as well as patients’ ability to cope with them. Chronic digestive diseases, such as irritable bowel syndrome, gastroesophageal reflux disease, and inflammatory bowel diseases, cannot be disentangled from their psychosocial context. In this regard, the role of gastroenterologists in promoting best practices for the assessment and referral of patients across the spectrum of disease to brain-gut psychotherapies is crucial.

A review by Laurie Keefer, PhD, AGAF, and her coauthors, published in the April issue of Gastroenterology, provided a clinical update on the structure and efficacy of two major classes of brain-gut psychotherapy – cognitive-behavioral therapy (CBT) and gut-directed hypnotherapy (HYP). The review discussed the effects of these therapies on GI symptoms and on patients’ ability to improve coping, resilience, and self-regulation. It also provided a framework for understanding the scientific rationale and best practices associated with incorporating brain-gut psychotherapies into routine GI care, and it presented recommendations on how to address psychological issues and make effective referrals in routine practice.

Previous studies had highlighted that the burden of chronic digestive diseases is amplified by psychosocial factors, including poor coping, depression, and poor social support. Mental health professionals specializing in psychogastroenterology integrate the use of brain-gut psychotherapies into GI practice settings, which may help reduce health care utilization and symptom burden.

The article contains best practice advice based on a review of the literature, including existing systematic reviews and expert opinions. These best practices include the following:

  • Gastroenterologists should routinely assess health-related quality of life, symptom-specific anxieties, early-life adversity, and functional impairment related to a patient’s digestive complaints.
  • Gastroenterologists should master patient-friendly language to explain the brain-gut pathway, how this pathway can become dysregulated by any number of factors, the psychosocial risk factors that perpetuate and maintain GI diseases, and why the gastroenterologist is referring a patient to a mental health provider.
  • Gastroenterologists should know the structure and core features of the most effective brain-gut psychotherapies.
  • Gastroenterologists should establish a direct referral and ongoing communication pathway with one or two qualified mental health providers and assure patients that the gastroenterologist will remain a part of the care team.
  • Gastroenterologists should familiarize themselves with one or two neuromodulators that can be used to augment behavioral therapies when necessary.

Patient education about the referral to a mental health provider is difficult and requires attention to detail and fostering a good physician-patient relationship. It is important to help patients understand why they are being referred to a psychologist for a gastrointestinal complaint and that their physical symptoms are not being discounted. Failure to properly explain the reason for referral may lead to poor follow-through and even lead the patient to seek care with another provider.

In order to foster widespread integration of these services, research and clinical gaps need to be addressed. Research gaps include the lack of prospective trials comparing the relative effectiveness of brain-gut psychotherapies with each other and/or with that of psychotropic medications. Other promising brain-gut therapies, such as mindfulness meditation or acceptance-based approaches, lack sufficient research to be included in clinical practice. Limited evidence supports the effects psychotherapies have in accelerating or enhancing the efficacy of pharmacologic therapies and in improving disease course or inflammation in conditions such as Crohn’s disease and ulcerative colitis.

 

 


Clinical gaps include the need for better coverage for these therapies by insurance – many providers are out of network or do not accept insurance, although Medicare and commercial insurance plans often cover the cost of services in network. Health psychologists can be reimbursed for health and behavior codes for treating these conditions (CPTs 96150/96152), but there are restrictions on which other types of professionals can use them. Ongoing research is focusing on the cost-effectiveness of these therapies, although some highly effective therapies may be short term and have a one-time total cost of $1,000-$2,000 paid out of pocket. There is a growing need to expand remote, online, or digitally based brain-gut therapies with more trained health care providers that could offset overhead and other therapy costs.


FROM GASTROENTEROLOGY


Opioids linked to mortality in IBD

Does opioid use in IBD result in increased mortality? 
Article Type
Changed
Sat, 12/08/2018 - 14:48

Among patients with inflammatory bowel disease (IBD), opioid prescriptions tripled during a recent 20-year period, and heavy use of strong opioids was a significant predictor of all-cause mortality, according to a large cohort study reported in the April issue of Clinical Gastroenterology and Hepatology.

Because this study was retrospective, it could not establish causality, said Nicholas E. Burr, MD, of the University of Leeds (England) and his associates. But “[de]signing and conducting a large-scale randomized controlled trial may not be feasible,” they wrote. “Despite the limitations of observational data, population data sets may be the best method to investigate a potential effect.”

Liderina/Thinkstock

The gastrointestinal side effects of many analgesics complicate pain management for patients with IBD, who not only live with chronic abdominal pain but also can develop arthropathy-related musculoskeletal pain, chronic widespread pain, and fibromyalgia. In addition to the risk of narcotic bowel associated with opioid use in IBD, opioids can mask flares in IBD or can cause toxic dilatation if administered during acute flares, the researchers noted. Because few studies had examined opioid use in IBD, the investigators retrospectively studied 3,517 individuals with Crohn’s disease and 5,349 patients with ulcerative colitis from ResearchOne, a primary care electronic health records database that covers about 10% of patients in England. The data set excluded patients with indeterminate colitis or who underwent colectomy for ulcerative colitis.

 

 

From 1990 through 1993, only 10% of patients with IBD were prescribed opioids, vs. 30% from 2010 through 2013 (P less than .005). After the investigators controlled for numerous demographic and clinical variables, being prescribed a strong opioid (morphine, oxycodone, fentanyl, buprenorphine, methadone, hydromorphone, or pethidine) more than three times per year significantly correlated with all-cause mortality in both Crohn’s disease (hazard ratio, 2.2; 95% confidence interval, 1.2-4.0) and ulcerative colitis (HR, 3.3; 95% CI, 1.8-6.2), the researchers reported.

Among patients with ulcerative colitis, more moderate use of strong opioids (one to three prescriptions annually) also significantly correlated with all-cause mortality (HR, 2.4; 95% CI, 1.2-5.2), as did heavy use of codeine (HR, 1.8; 95% CI, 1.1-3.1), but these associations did not reach statistical significance among patients with Crohn’s disease. Tramadol was not linked to mortality in either IBD subtype when used alone or in combination with codeine.

Dr. Burr and his associates said they could not control for several important potential confounders, including fistulating disease, quality of life, mental illness, substance abuse, and history of abuse, all of which have been linked to opioid use in IBD. Nonetheless, they found dose-dependent correlations with mortality that highlight a need for pharmacovigilance of opioids in IBD, particularly given dramatic increases in prescriptions, they said. These were primary care data, which tend to accurately reflect long-term medication use, they noted.

Crohn’s and Colitis U.K. and the Leeds Teaching Hospitals NHS Trust Charitable Foundation provided funding. The investigators reported having no conflicts of interest.

SOURCE: Burr NE et al. Clin Gastroenterol Hepatol. doi: 10.1016/j.cgh.2017.10.022.

Body

Balancing control of pain and prevention of opioid-related morbidity and mortality remains a major challenge for health care providers, particularly in IBD. This study by Burr et al. highlights the potential dangers of opiate use among patients with IBD with the finding that opioid prescriptions at least three times per year were associated with a two- to threefold increase in mortality. Another important observation from this study was that the prevalence of opioid use among IBD patients increased from 10% to 30% during 1990-2013. One would like to believe that, with better treatment modalities for IBD, fewer patients would require chronic opioid medications over time; however, this observation suggests that there has been a shift in the perception and acceptance of opioids for IBD patients.

Studying opioid use among IBD patients remains challenging as even well-controlled retrospective studies are unable to fully separate whether opioid use is merely associated with more aggressive IBD courses and hence worse outcomes, or whether opioid use directly results in increased mortality. As clinicians, we are left with the difficult balance of addressing true symptoms of pain with the potential harm from opioids; we often counsel against the use of nonsteroidal anti-inflammatory medications in IBD, and yet there is growing concern about use of opioids in this same population. Further research is needed to address patients with pain not directly tied to inflammation or complications of IBD, as well as nonmedical, behavioral approaches to pain management.  
 

Dr. Jason K. Hou

Jason K. Hou, MD, MS, is an investigator in the clinical epidemiology and outcomes program, Center for Innovations in Quality, Effectiveness and Safety at the Michael E. DeBakey VA Medical Center, Houston; assistant professor, department of medicine, section of gastroenterology & hepatology, Baylor College of Medicine, Houston; and codirector of Inflammatory Bowel Disease Center at the VA Medical Center at Baylor. He has no conflicts of interest.





FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

Vitals

 

Key clinical point: Among patients with inflammatory bowel disease, opioid prescriptions tripled in a recent 20-year period, and their heavy use significantly correlated with all-cause mortality.

Major finding: Thirty percent of patients were prescribed opioids in 2010-2013 vs. only 10% in 1990-1993 (P less than .005 for trend). Heavy use of strong opioids significantly correlated with all-cause mortality in both Crohn’s disease (hazard ratio, 2.2; 95% confidence interval, 1.2-4.0) and ulcerative colitis (HR, 3.3; 95% CI, 1.8-6.2).

Study details: A retrospective cohort study of 3,517 individuals with Crohn’s disease and 5,349 individuals with ulcerative colitis.

Disclosures: Crohn’s and Colitis U.K. and the Leeds Teaching Hospitals NHS Trust Charitable Foundation provided funding. The investigators reported having no conflicts.

Source: Burr NE et al. Clin Gastroenterol Hepatol. doi: 10.1016/j.cgh.2017.10.022.


Bioengineered liver models screen drugs and study liver injury

Article Type
Changed
Sat, 12/08/2018 - 14:51

Bioengineered liver models have enabled recapitulation of liver architecture with precise control over cellular microenvironments, resulting in stabilized liver functions for several weeks in vitro. Studies have focused on using these models to investigate cell responses to drugs and other stimuli (for example, viruses and cell differentiation cues) to predict clinical outcomes. Gregory H. Underhill, PhD, of the department of bioengineering at the University of Illinois at Urbana-Champaign, and Salman R. Khetani, PhD, of the department of bioengineering at the University of Illinois at Chicago, presented a comprehensive review of these advances in bioengineered liver models in Cellular and Molecular Gastroenterology and Hepatology (doi: 10.1016/j.jcmgh.2017.11.012).

Drug-induced liver injury (DILI) is a leading cause of drug attrition in the United States, with some marketed drugs causing cell necrosis, hepatitis, cholestasis, fibrosis, or a mixture of injury types. Although the Food and Drug Administration requires preclinical drug testing in animal models, differences in species-specific drug metabolism pathways and human genetics may result in inadequate identification of potential for human DILI. Some bioengineered liver models for in vitro studies are based on tissue engineering using high-throughput microarrays, protein micropatterning, microfluidics, specialized plates, biomaterial scaffolds, and bioprinting.

High-throughput cell microarrays enable systematic analysis of a large number of drugs or compounds at a relatively low cost. Several culture platforms have been developed using multiple sources of liver cells, including cancerous and immortalized cell lines. These platforms show enhanced capabilities to evaluate combinatorial effects of multiple signals with independent control of biochemical and biomechanical cues. For instance, a microchip platform for transducing 3-D liver cell cultures with genes for drug metabolism enzymes featuring 532 reaction vessels (micropillars and corresponding microwells) was able to provide information about certain enzyme combinations that led to drug toxicity in cells. The high-throughput cell microarrays are, however, primarily dependent on imaging-based readouts and have a limited ability to investigate cell responses to gradients of microenvironmental signals.
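
The combinatorial logic behind such a platform, in which each combination of a small panel of drug-metabolism enzymes is assigned to its own reaction vessel, can be sketched in a few lines. The enzyme panel and layout below are hypothetical examples for illustration, not details taken from the cited study.

```python
from itertools import combinations

# Hypothetical panel of drug-metabolism enzymes to overexpress
enzymes = ["CYP1A2", "CYP2C9", "CYP2C19", "CYP2D6", "CYP3A4"]

# Enumerate every nonempty enzyme combination; each gets its own well
wells = [combo for r in range(1, len(enzymes) + 1)
         for combo in combinations(enzymes, r)]

print(len(wells))  # 31 combinations for a 5-enzyme panel (2^5 - 1)
```

The well count grows exponentially with panel size, which is why micropillar/microwell chips with hundreds of reaction vessels are needed to cover even modest enzyme panels with replicates.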

Liver development, physiology, and pathophysiology are dependent on homotypic and heterotypic interactions between parenchymal and nonparenchymal cells (NPCs). Cocultures with both liver- and nonliver-derived NPC types, in vitro, can induce liver functions transiently and have proven useful for investigating host responses to sepsis, mutagenesis, xenobiotic metabolism and toxicity, response to oxidative stress, lipid metabolism, and induction of the acute-phase response. Micropatterned cocultures (MPCCs) are designed to allow the use of different NPC types without significantly altering hepatocyte homotypic interactions. Cell-cell interactions can be precisely controlled to allow for stable functions for up to 4-6 weeks, whereas more randomly distributed cocultures have limited stability. Unlike randomly distributed cocultures, MPCCs can be infected with HBV, HCV, and malaria. Potential limitations of MPCCs include the requirement for specialized equipment and devices for patterning collagen for hepatocyte attachment.

Randomly distributed spheroids or organoids enable 3-D establishment of homotypic cell-cell interactions surrounded by an extracellular matrix. The spheroids can be further cocultured with NPCs that facilitate heterotypic cell-cell interactions and allow the evaluation of outcomes resulting from drugs and other stimuli. Hepatic spheroids maintain major liver functions for several weeks and have proven to be compatible with multiple applications within the drug development pipeline.

These spheroids showed greater sensitivity in identifying known hepatotoxic drugs than did short-term primary human hepatocyte (PHH) monolayers. PHHs secreted liver proteins, such as albumin, transferrin, and fibrinogen, and showed cytochrome-P450 activities for 77-90 days when cultured on a nylon scaffold containing a mixture of liver NPCs and PHHs.
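
Sensitivity here means the fraction of drugs with known hepatotoxicity that a model correctly flags. A minimal sketch with made-up counts, purely to illustrate the metric (the numbers are not from the studies discussed):

```python
def sensitivity(true_positives, false_negatives):
    """Fraction of known hepatotoxicants a screen correctly flags."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical screen of 50 known hepatotoxic drugs: 35 flagged, 15 missed
print(sensitivity(35, 15))  # 0.7, i.e., the model detects 70% of hepatotoxicants
```

Comparing this fraction across model systems, at a fixed specificity on known safe drugs, is how claims like "spheroids outperform short-term monolayers" are typically quantified.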

Nanopillar plates can be used to create induced pluripotent stem cell–derived human hepatocyte-like cell (iHep) spheroids; although these spheroids showed some potential for initial drug toxicity screening, they had lower overall sensitivity than conventional PHH monolayers, which suggests that further maturation of iHeps is likely required.

Potential limitations of randomly distributed spheroids include necrosis of cells in the center of larger spheroids and the requirement for expensive confocal microscopy for high-content imaging of entire spheroid cultures. To overcome the limitation of disorganized cell type interactions over time within the randomly distributed spheroids/organoids, bioprinted human liver organoids are designed to allow precise control of cell placement.

SOURCE: Underhill GH and Khetani SR. Cell Mol Gastroenterol Hepatol. 2017. doi: 10.1016/j.jcmgh.2017.11.012.

Body

 

Thirty to 50 new drugs are approved in the United States annually, at a development cost of approximately $2.5 billion per drug. Nine out of 10 drug candidates never make it to market, and adverse events can curtail the longevity of those that do. Hepatotoxicity is the most frequent adverse drug reaction, and drug-induced liver injury, which can lead to acute liver failure, occurs in a subset of affected patients. Understanding a drug’s risk of hepatotoxicity before patients start using it can not only save lives but also conceivably reduce the costs incurred by pharmaceutical companies, which are passed on to consumers.

Dr. Rotonya Carr
In Cellular and Molecular Gastroenterology and Hepatology, Underhill and Khetani summarize available and emerging cell-based, high-throughput systems that can be used to predict hepatotoxicity. These modalities include cellular microarrays of single cells; cocultures of liver parenchymal and nonparenchymal cells; organoids (3-D organ-like structures); and liver-on-a-chip devices (complex perfusion bioreactors that allow for modulation of the cellular micro-environment). These in vitro systems have not only enabled investigators to screen multiple drugs at the same time but also have informed the clinical translation of these technologies. For example, the extracorporeal liver assist device – essentially, a liver bypass – and similar bioartificial liver devices can in principle temporarily perform some of the major liver functions while a patient’s native liver heals from drug-induced liver injury or other hepatic injury.

However, just as we have seen with the limitations of the in vitro systems, bioartificial livers are unlikely to be successful unless they integrate the liver’s complex functions of protein synthesis, immune surveillance, energy homeostasis, and nutrient sensing. The future is bright, though, as biomedical scientists and bioengineers continue to push the envelope by advancing both in vitro and bioartificial technologies.

Rotonya Carr, MD, is an assistant professor of medicine in the division of gastroenterology at the University of Pennsylvania, Philadelphia. She receives research support from Intercept Pharmaceuticals.


FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY
