Study supports expanded definition of serrated polyposis syndrome
Patients with more than 10 colonic polyps, of which at least half were serrated, and their first-degree relatives had a risk of colorectal cancer similar to that of patients who met formal diagnostic criteria for serrated polyposis syndrome (SPS), according to a retrospective multicenter study published in the July issue of Gastroenterology (doi: 10.1053/j.gastro.2017.04.003).
Such patients “should be treated with the same follow-up procedures as those proposed for patients with SPS, and possibly the definition of SPS should be broadened to include this phenotype,” wrote Cecilia M. Egoavil, MD, Miriam Juárez, and their associates.
SPS increases the risk of colorectal cancer (CRC) and is considered a heritable disease, which mandates “strict surveillance” of first-degree relatives, the researchers noted. The World Health Organization defines SPS as having at least five histologically diagnosed serrated lesions proximal to the sigmoid colon, of which two are at least 10 mm in diameter, or serrated polyps proximal to the sigmoid colon and a first-degree relative with SPS, or more than 20 serrated polyps throughout the colon. This “arbitrary” definition is “somewhat restrictive, and possibly leads to underdiagnosis of this disease,” the researchers wrote. Patients with multiple serrated polyps who do not meet WHO SPS criteria might have a “phenotypically attenuated form of serrated polyposis.”
For the study, the researchers compared 53 patients meeting WHO SPS criteria with 145 patients who did not meet these criteria but had more than 10 polyps throughout the colon, of which at least 50% were serrated. For both groups, the total number of polyps was obtained by summing polyp counts across successive colonoscopies. The data source was EPIPOLIP, a multicenter study of patients recruited from 24 hospitals in Spain in 2008 and 2009. At baseline, all patients had more than 10 adenomatous or serrated colonic polyps but did not have familial adenomatous polyposis, Lynch syndrome, hamartomatous polyposis, inflammatory bowel disease, or only hyperplastic rectosigmoid polyps.
The prevalence of CRC was statistically similar between groups (P = .4). There were 12 (22.6%) cases among SPS patients (mean age at diagnosis, 50 years), and 41 (28.3%) cases (mean age, 59 years) among patients with multiple serrated polyps who did not meet SPS criteria. During a mean follow-up of 4.2 years, one (1.9%) SPS patient developed incident CRC, as did four (2.8%) patients with multiple serrated polyps without SPS. Thus, standardized incidence ratios were 0.51 (95% confidence interval, 0.01-2.82) and 0.74 (95% CI, 0.20-1.90), respectively (P = .7). Standardized incidence ratios for CRC also did not significantly differ between first-degree relatives of patients with SPS (3.28, 95% CI, 2.16-4.77) and those with multiple serrated polyps (2.79, 95% CI, 2.10-3.63; P = .5).
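For readers who want to check the arithmetic, a standardized incidence ratio is simply observed cases divided by expected cases, with an exact Poisson confidence interval around the observed count. The short Python sketch below is illustrative only: the expected-case figure is back-calculated so that one observed case roughly reproduces the reported 0.51 (95% CI, 0.01-2.82); it is not taken from the study's person-year data.

```python
from scipy.stats import chi2

def sir_with_ci(observed, expected, alpha=0.05):
    """Standardized incidence ratio (observed/expected cases) with an
    exact Poisson confidence interval on the observed count."""
    sir = observed / expected
    lower = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return sir, lower / expected, upper / expected

# Expected count back-calculated for illustration (1 observed case,
# SIR ~0.51); prints roughly (0.51, 0.01, 2.84).
print(sir_with_ci(observed=1, expected=1.96))
```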
A Kaplan-Meier analysis confirmed that there were no differences in the incidence of CRC between groups during follow-up. The findings “confirm that a special surveillance strategy is needed for patients with multiple serrated polyps and their relatives, probably similar to the strategy currently recommended for SPS patients,” the researchers concluded. They arbitrarily defined the group with multiple serrated polyps, so they were not able to link CRC to a cutoff number or percentage of serrated polyps, they noted.
Funders included Instituto de Salud Carlos III, Fundación de Investigación Biomédica de la Comunidad Valenciana-Instituto de Investigación Sanitaria y Biomédica de Alicante, Asociación Española Contra el Cáncer, and Conselleria d’Educació de la Generalitat Valenciana. The investigators had no conflicts of interest.
FROM GASTROENTEROLOGY
Key clinical point: Risk of colorectal cancer was similar among patients with serrated polyposis syndrome and those who did not meet formal diagnostic criteria but had more than 10 colonic polyps, of which more than 50% were serrated, and their first-degree relatives.
Major finding: Standardized incidence ratios were 0.51 (95% confidence interval, 0.01-2.82) in patients who met criteria for serrated polyposis syndrome and 0.74 (95% CI, 0.20-1.90) in patients with multiple serrated polyps who did not meet the criteria (P = .7).
Data source: A multicenter retrospective study of 53 patients who met criteria for serrated polyposis and 145 patients who did not meet these criteria, but had more than 10 polyps throughout the colon, of which more than 50% were serrated.
Disclosures: Funders included Instituto de Salud Carlos III, Fundación de Investigación Biomédica de la Comunidad Valenciana–Instituto de Investigación Sanitaria y Biomédica de Alicante, Asociación Española Contra el Cáncer, and Conselleria d’Educació de la Generalitat Valenciana. The investigators had no conflicts of interest.
Steatosis linked to persistent ALT increase in hepatitis B
About one in five patients with chronic hepatitis B virus (HBV) infection had persistently elevated alanine aminotransferase (ALT) levels despite long-term treatment with tenofovir disoproxil fumarate, according to data from two phase III trials reported in the July issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2017.01.032).
“Both host and viral factors, particularly hepatic steatosis and hepatitis B e antigen [HBeAg] seropositivity, are important contributors to this phenomenon,” Ira M. Jacobson, MD, of Mount Sinai Beth Israel Medical Center, New York, wrote with his associates. “Although serum ALT may indicate significant liver injury, this association is inconsistent, suggesting that relying on serum ALT alone is not sufficient to gauge either the extent of liver injury or the impact of antiviral therapy.”
Long-term treatment with newer antivirals such as tenofovir disoproxil fumarate (TDF) achieves complete viral suppression and improves liver histology in most cases of HBV infection. Transaminase levels are used to track long-term clinical response but sometimes remain elevated in the face of complete virologic response and regression of fibrosis. To explore predictors of this outcome, the researchers analyzed data from 471 chronic HBV patients receiving TDF 300 mg once daily for 5 years as part of two ongoing phase III trials (NCT00117676 and NCT00116805). At baseline, about 25% of patients were cirrhotic (Ishak fibrosis score greater than or equal to 5) and none had decompensated cirrhosis. A central laboratory analyzed ALT levels, which were up to 10 times the upper limit of normal in both HBeAg-positive and -negative patients and were at least twice the upper limit of normal in all HBeAg-positive patients.
After 5 years of TDF, ALT levels remained elevated in 87 patients (18%). Patients with at least 5% (grade 1) steatosis at baseline were significantly more likely to have persistent ALT elevation than were those with less or no steatosis (odds ratio, 2.2; 95% confidence interval, 1.03-4.9; P = .04). At least grade 1 steatosis at year 5 also was associated with persistent ALT elevation (OR, 3.4; 95% CI, 1.6-7.4; P = .002). Other significant correlates included HBeAg seropositivity (OR, 3.3; 95% CI, 1.7-6.6; P less than .001) and age 40 years or younger (OR, 2.1; 95% CI, 1.01-4.3; P = .046). Strikingly, half of HBeAg-positive patients with steatosis at baseline had elevated ALT at year 5, said the investigators.
Because many patients whose ALT values fall within commercial laboratory reference ranges nonetheless have chronic active necroinflammation or fibrogenesis, the researchers performed a sensitivity analysis using a stricter, previously recommended definition of ALT normalization: no more than 30 U/L for men and 19 U/L for women (Ann Intern Med. 2002;137:1-10). In this analysis, 47% of patients had persistently elevated ALT despite effective virologic suppression, and the only significant predictor of persistent ALT elevation was grade 1 or greater steatosis at year 5 (OR, 6.2; 95% CI, 2.3-16.4; P less than .001). Younger age and HBeAg seropositivity were no longer significant.
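To make the stricter cutoff concrete, the sketch below applies the sex-specific thresholds described above (30 U/L for men, 19 U/L for women) to flag persistent ALT elevation. The record layout and patient values are hypothetical; only the thresholds come from the article.

```python
# Stricter, sex-specific ALT normalization cutoffs from the article:
# 30 U/L for men, 19 U/L for women. Patient records are hypothetical.
ALT_CUTOFF = {"M": 30, "F": 19}

def persistently_elevated(sex, alt_year5):
    """True if the year-5 ALT exceeds the stricter cutoff for that sex."""
    return alt_year5 > ALT_CUTOFF[sex]

patients = [("M", 42), ("F", 18), ("F", 25)]
print([p for p in patients if persistently_elevated(*p)])
# [('M', 42), ('F', 25)] -- both exceed their sex-specific cutoff
```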
Hepatic steatosis is common overall and in chronic HBV infection and often leads to increased serum transaminases, the researchers noted. Although past work has linked a PNPLA3 single nucleotide polymorphism to obesity, metabolic syndrome, and hepatic steatosis, the presence of this single nucleotide polymorphism was not significant in their study, possibly because many patients lacked genotype data, they added. “Larger longitudinal studies are warranted to further explore this factor and its potential effect on the biochemical response to antiviral treatment in [chronic HBV] patients,” they concluded.
Gilead Sciences sponsored the study. Dr. Jacobson disclosed consultancy, honoraria, and research ties to Gilead and several other pharmaceutical companies.
In most treated patients, antiviral therapy for chronic hepatitis B virus suppresses rather than eradicates infection. Even so, long-term treatment results in substantial histologic improvement – including regression of fibrosis and a reduction in complications.
However, as Jacobson et al. report in a histologic follow-up of 471 HBV patients treated long term, aminotransferase elevation persisted in 18%. Factors implicated in unresolved biochemical dysfunction on multivariate analysis included HBeAg seropositivity, age less than 40 years, and steatosis at entry, in addition to steatosis at 5-year follow-up. When the modified normal ranges for aminotransferases proposed by Prati (30 U/L for men and 19 U/L for women) were applied, steatosis was the only association with hepatic dysfunction that persisted. This suggests that metabolic rather than viral factors are implicated in persistent biochemical dysfunction in patients with chronic HBV infection. Steatosis is also a frequent finding on liver biopsy in patients with chronic HCV infection.
Importantly, HCV-specific mechanisms have been implicated in the accumulation of steatosis in infected patients, as the virus may interfere with host lipid metabolism. HCV genotype 3 has a marked propensity to cause fat accumulation in hepatocytes, which appears to regress with successful antiviral therapy. In the interferon era, hepatic steatosis had been identified as a predictor of nonresponse to therapy for HCV. In patients with chronic viral hepatitis, attention needs to be paid to cofactors in liver disease – notably the metabolic syndrome – particularly because successfully treated patients are now discharged from the care of specialists.
Paul S. Martin, MD, is chief, division of hepatology, professor of medicine, University of Miami Health System, Fla. He has been a consultant and investigator for Gilead, BMS, and Merck.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: In patients with chronic hepatitis B virus infection, steatosis was significantly associated with persistently elevated alanine aminotransferase (ALT) levels despite successful treatment with tenofovir disoproxil fumarate.
Major finding: At baseline and after 5 years of treatment, steatosis of grade 1 (5%) or more predicted persistent ALT elevation with odds ratios of 2.2 (P = .04) and 3.4 (P = .002), respectively.
Data source: Two phase III trials of tenofovir disoproxil fumarate in 471 patients with chronic hepatitis B virus infection.
Disclosures: Gilead Sciences sponsored the study. Dr. Jacobson disclosed consultancy, honoraria, and research ties to Gilead and several other pharmaceutical companies.
Improved adenoma detection rate found protective against interval cancers, death
An improved annual adenoma detection rate was associated with a significantly decreased risk of interval colorectal cancer (ICRC) and subsequent death in a national prospective cohort study published in the July issue of Gastroenterology (doi: 10.1053/j.gastro.2017.04.006).
This is the first study to show a significant inverse relationship between an improved annual adenoma detection rate (ADR) and ICRC or subsequent death, Michal F. Kaminski, MD, PhD, of the Institute of Oncology, Warsaw, wrote with his associates.
The rates of these outcomes were lowest when endoscopists achieved and maintained ADRs above 24.6%, which supports the currently recommended performance target of 25% for a mixed male-female population, they reported (Am J Gastroenterol. 2015;110:72-90).
This study included 294 endoscopists and 146,860 individuals who underwent screening colonoscopy as part of a national cancer prevention program in Poland between 2004 and 2008. Endoscopists received annual feedback based on quality benchmarks to spur improvements in colonoscopy performance, and all participated for at least 2 years. For each endoscopist, investigators categorized annual ADRs based on quintiles for the entire data set. “Improved ADR” was defined as keeping annual ADR within the highest quintile (above 24.6%) or as increasing annual ADR by at least one quintile, compared with baseline.
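As a rough illustration of that definition, the Python sketch below bins annual ADRs into dataset-wide quintiles and flags improvement. The quintile cut points other than the reported 24.6% top-quintile boundary are hypothetical, and this is one simplified reading of the study's rule, not the authors' code.

```python
import numpy as np

# Hypothetical quintile cut points over all annual ADRs in the data set;
# only the 24.6% top-quintile boundary is reported in the article.
EDGES = [0.11, 0.15, 0.20, 0.246]  # 4 cut points -> quintiles 1-5

def quintile(adr):
    """Quintile (1-5) of an annual ADR, given dataset-wide cut points."""
    return int(np.searchsorted(EDGES, adr, side="right")) + 1

def improved_adr(annual_adrs):
    """Simplified reading of the study's rule: a later annual ADR reaches
    the top quintile or rises at least one quintile above baseline."""
    baseline = quintile(annual_adrs[0])
    return any(quintile(a) == 5 or quintile(a) > baseline
               for a in annual_adrs[1:])

print(improved_adr([0.18, 0.19, 0.26]))  # True: year 3 is above 24.6%
```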
Based on this definition, 219 endoscopists (75%) improved their ADR during a median of 5.8 years of follow-up (interquartile range, 5-7.2 years). In all, 168 interval CRCs were diagnosed, of which 44 cases led to death. After age, sex, and family history of colorectal cancer were controlled for, patients whose endoscopists improved their ADRs were significantly less likely to develop interval CRC (adjusted hazard ratio, 0.6; 95% confidence interval, 0.5-0.9; P = .006) or to die of it (adjusted HR, 0.5; 95% CI, 0.3-0.95; P = .04) than were patients whose endoscopists did not improve their ADRs.
Maintaining ADR in the highest quintile (above 24.6%) throughout follow-up led to an even lower risk of interval CRC (HR, 0.3; 95% CI, 0.1-0.6; P = .003) and death (HR, 0.2; 95% CI, 0.1-0.6; P = .003), the researchers reported. In absolute numbers, that translated to a decrease from 25.3 interval CRCs per 100,000 person-years of follow-up to 7.1 cases when endoscopists eventually reached the highest ADR quintile or to 4.5 cases when they were in the highest quintile throughout follow-up. Rates of colonic perforation remained stable even though most endoscopists upped their ADRs.
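Those absolute figures are ordinary incidence rates: cases divided by person-years of follow-up, scaled to 100,000. A one-line check, with a hypothetical denominator since person-years are not given in this summary:

```python
def rate_per_100k(cases, person_years):
    """Incidence rate per 100,000 person-years of follow-up."""
    return 1e5 * cases / person_years

# Hypothetical inputs; the summary reports rates, not raw person-years.
print(rate_per_100k(25, 98_800))  # ~25.3 per 100,000 person-years
```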
Together, these findings “prove the causal relationship between endoscopists’ ADRs and the likelihood of being diagnosed with, or dying from, interval CRC,” the investigators concluded. The national cancer registry in Poland is thought to miss about 10% of cases, but the rate of missing cases was not thought to change over time, they noted. However, they also lacked data on colonoscope withdrawal times, and had no control group to definitively show that feedback based on benchmarking was responsible for improved ADRs.
Funders included the Foundation of Polish Science, the Innovative Economy Operational Programme, the Polish Foundation of Gastroenterology, the Polish Ministry of Health, and the Polish Ministry of Science and Higher Education. The investigators reported having no relevant conflicts of interest.
The U.S. Multi-Society Task Force on Colorectal Cancer proposed the adenoma detection rate (ADR) as a colonoscopy quality measure in 2002. The rationale for a new measure was emerging evidence of highly variable adenoma detection and cancer prevention among colonoscopists. Highly variable performance, consistently verified in subsequent studies, casts a pall of severe operator dependence over colonoscopy. In landmark studies from Kaminski et al. and Corley et al. in 2010 and 2014, respectively, it was shown that doctors with higher ADRs provide patients with much greater protection against interval colorectal cancer (CRC).
Now Kaminski and colleagues from Poland have delivered a second landmark study, demonstrating for the first time that improving ADR prevents CRCs. We now have strong evidence that ADR predicts the level of cancer prevention, that ADR improvement is achievable, and that improving ADR further prevents CRCs and CRC deaths. Thanks to this study, ADR has come full circle. Measurement of and improvement in detection is now a fully validated concept that is essential to modern colonoscopy. In 2017, ADR measurement is mandatory for all practicing colonoscopists who are serious about CRC prevention. Widely accepted tools to improve ADR include ADR measurement and reporting, split or same-day preparations, lesion recognition and optimal technique, high-definition imaging, double examination (particularly for the right colon), patient rotation during withdrawal, chromoendoscopy, mucosal exposure devices (caps, cuffs, balloons, etc.), and water exchange. Tools that are emerging or under study include brighter forms of electronic chromoendoscopy and videorecording.
Douglas K. Rex, MD, is professor of medicine, division of gastroenterology/hepatology, at Indiana University, Indianapolis.* He has no relevant conflicts of interest.
Correction, 6/20/17: An earlier version of this article misstated Dr. Rex's affiliation.
FROM GASTROENTEROLOGY
Key clinical point: An improved adenoma detection rate was associated with a significantly reduced risk of interval colorectal cancer and subsequent death.
Major finding: Adjusted hazard ratios were 0.6 for developing ICRC (95% CI, 0.5-0.9; P = .006) and 0.50 for dying of ICRC (95% CI, 0.3-0.95; P = .04).
Data source: A prospective registry study of 294 endoscopists and 146,860 individuals who underwent screening colonoscopy as part of a national screening program between 2004 and 2008.
Disclosures: Funders included the Foundation of Polish Science, the Innovative Economy Operational Programme, the Polish Foundation of Gastroenterology, the Polish Ministry of Health, and the Polish Ministry of Science and Higher Education. The investigators reported having no relevant conflicts of interest.
Start probiotics within 2 days of antibiotics to prevent CDI
Starting probiotics within 2 days of the first antibiotic dose could cut the risk of Clostridium difficile infection among hospitalized adults by more than 50%, according to the results of a systematic review and metaregression analysis.
The protective effect waned when patients delayed starting probiotics, reported Nicole T. Shen, MD, of Cornell University, New York, and her associates. The study appears in Gastroenterology (doi: 10.1053/j.gastro.2017.02.003). “Given the magnitude of benefit and the low cost of probiotics, the decision is likely to be highly cost effective,” they added.
Systematic reviews support the use of probiotics for preventing Clostridium difficile infection (CDI), but guidelines do not reflect these findings. To help guide clinical practice, the reviewers searched MEDLINE, EMBASE, the International Journal of Probiotics and Prebiotics, and the Cochrane Library databases for randomized controlled trials of probiotics and CDI among hospitalized adults taking antibiotics. This search yielded 19 published studies of 6,261 patients. Two reviewers separately extracted data from these studies and examined quality of evidence and risk of bias.
A total of 54 patients in the probiotic cohort (1.6%) developed CDI, compared with 115 controls (3.9%), a statistically significant difference (P less than .001). In regression analysis, the probiotic group was about 58% less likely to develop CDI than controls (hazard ratio, 0.42; 95% confidence interval, 0.30-0.57; P less than .001). Importantly, probiotics were significantly effective against CDI only when started within 2 days of antibiotic initiation (relative risk, 0.32; 95% CI, 0.22-0.48), not when started within 3-7 days (RR, 0.70, 95% CI, 0.40-1.23). The difference between these estimated risk ratios was statistically significant (P = .02).
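The reported P = .02 for the difference between the two risk ratios can be reproduced with a standard z test on the log scale, recovering each standard error from its 95% CI. Whether the reviewers used exactly this method is not stated, so treat the sketch below as illustrative.

```python
import math
from scipy.stats import norm

def se_log_rr(lo, hi):
    """Standard error of ln(RR), recovered from a 95% CI."""
    return (math.log(hi) - math.log(lo)) / (2 * 1.96)

def p_for_rr_difference(rr1, ci1, rr2, ci2):
    """Two-sided z test for a difference between independent ln(RR)s."""
    z = (math.log(rr1) - math.log(rr2)) / math.hypot(se_log_rr(*ci1),
                                                     se_log_rr(*ci2))
    return 2 * norm.sf(abs(z))

# The two timing strata as reported above; prints roughly 0.02.
print(p_for_rr_difference(0.32, (0.22, 0.48), 0.70, (0.40, 1.23)))
```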
In 18 of the 19 studies, patients received probiotics within 3 days of starting antibiotics, while patients in the remaining study could start probiotics any time within 7 days of antibiotic initiation. “Not only was [this] study unusual with respect to probiotic timing, it was also much larger than all other studies, and its results were statistically insignificant,” the reviewers wrote. Metaregression analyses of all studies and of all but the outlier study linked delaying probiotics with a decrease in efficacy against CDI, with P values of .04 and .09, respectively. Those findings “suggest that the decrement in efficacy with delay in starting probiotics is not sensitive to inclusion of a single large ‘outlier’ study,” the reviewers emphasized. “In fact, inclusion only dampens the magnitude of the decrement in efficacy, although it is still clinically important and statistically significant.”
The trials included 12 probiotic formulas containing Lactobacillus, Saccharomyces, Bifidobacterium, and Streptococcus, either alone or in combination. Probiotics were not associated with adverse effects in the trials. Quality of evidence was generally high, but seven trials had missing data on the primary outcome. Furthermore, two studies lacked a placebo group, and lead authors of two studies disclosed ties to the probiotic manufacturers that provided funding.
One reviewer received fellowship support from the Louis and Rachel Rudin Foundation. None had conflicts of interest.
FROM GASTROENTEROLOGY
Key clinical point: Starting probiotics within 2 days of antibiotics was associated with a significantly reduced risk of Clostridium difficile infection among hospitalized patients.
Major finding: Probiotics were significantly effective against CDI only when started within 2 days of antibiotic initiation (relative risk, 0.32; 95% CI, 0.22-0.48), not when started within 3-7 days (RR, 0.70; 95% CI, 0.40-1.23).
Data source: A systematic review and metaregression analysis of 19 studies of 6,261 patients.
Disclosures: One reviewer received fellowship support from the Louis and Rachel Rudin Foundation. None had conflicts of interest.
Distance from transplant center predicted mortality in chronic liver disease
Living more than 150 miles from a liver transplant center was associated with a higher risk of mortality among patients with chronic liver failure, regardless of etiology, transplantation status, or whether patients had decompensated cirrhosis or hepatocellular carcinoma, according to a first-of-its-kind, population-based study reported in the June issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2017.02.023).
The findings underscore the need for accessible, specialized liver care irrespective of whether patients with chronic liver failure (CLF) are destined for transplantation, David S. Goldberg, MD, of the University of Pennsylvania, Philadelphia, wrote with his associates. The associations “do not provide cause and effect,” but underscore the need to consider “the broader impact of transplant-related policies that could decrease transplant volumes and threaten closures of smaller liver transplant centers that serve geographically isolated populations in the Southeast and Midwest,” they added.
A total of 879 (5.2%) patients lived more than 150 miles from the nearest liver transplant center, the analysis showed. Even after controlling for etiology of liver disease, this subgroup was at significantly greater risk of mortality (hazard ratio, 1.2; 95% confidence interval, 1.1-1.3; P less than .001) and of dying without undergoing transplantation (HR, 1.2; 95% CI, 1.1-1.3; P = .003) than were patients who were less geographically isolated. Distance from a transplant center also predicted overall and transplant-free mortality when modeled as a continuous variable, with hazard ratios of 1.02 (P = .02) and 1.03 (P = .04), respectively. “Although patients living more than 150 miles from a liver transplant center had fewer outpatient gastroenterologist visits, this covariate did not affect the final models,” the investigators reported. Rural locality did not predict mortality after controlling for distance from a transplant center, and neither did living in a low-income zip code, they added.
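For readers less familiar with survival analysis, the paragraph above describes a Cox proportional hazards model fit two ways: with distance dichotomized at 150 miles and with distance as a continuous covariate. Below is a minimal sketch of that modeling pattern using the lifelines package; the data, column names, and distance unit are hypothetical assumptions, not the authors’ code or data:

```python
# Minimal sketch of the two ways distance can enter a Cox proportional
# hazards model: as a >150-mile flag and as a continuous covariate.
# Data, column names, and scales are hypothetical, not the study's.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years":    [1.2, 4.5, 2.3, 6.0, 0.8, 5.1, 3.7, 2.9],  # follow-up time
    "died":     [1,   0,   1,   1,   1,   0,   0,   0],    # event indicator
    "distance": [310, 40,  220, 15,  500, 90,  180, 60],   # miles to center
})
df["far"] = (df["distance"] > 150).astype(int)

# Binary exposure: the fitted HR compares >150 miles vs <=150 miles.
CoxPHFitter().fit(df[["years", "died", "far"]],
                  duration_col="years", event_col="died").print_summary()

# Continuous exposure: the fitted HR is per one unit of distance (the
# study summary does not state the unit used for its continuous
# estimates), so an HR such as 1.02 compounds per additional unit.
CoxPHFitter().fit(df[["years", "died", "distance"]],
                  duration_col="years", event_col="died").print_summary()
```

A per-unit hazard ratio as small as 1.02 can still matter clinically if the unit is small relative to typical distances, because it compounds across each additional unit.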
Data from the Centers for Disease Control and Prevention indicate that age-adjusted rates of death from liver disease are lowest in New York, where the entire population lives within 150 miles of a liver transplant center, the researchers noted. “By contrast, New Mexico and Wyoming have the highest age-adjusted death rates, and more than 95% of those states’ populations live more than 150 miles from a [transplant] center,” they emphasized. “The management of most patients with CLF is not centered on transplantation, but rather the spectrum of care for decompensated cirrhosis and hepatocellular carcinoma. Thus, maintaining access to specialized liver care is important for patients with CLF.”
Dr. Goldberg received support from the National Institutes of Health. The investigators had no conflicts.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: Geographic isolation from a liver transplant center independently predicted mortality among patients with chronic liver failure.
Major finding: In adjusted analyses, patients who lived more than 150 miles from a liver transplant center were at significantly greater risk of mortality (HR, 1.2; 95% CI, 1.1-1.3; P less than .001) and of dying without undergoing transplantation (HR, 1.2; 95% CI, 1.1-1.3; P = .003) than were patients who were less geographically isolated.
Data source: A retrospective cohort study of 16,824 patients with chronic liver failure who were included in the Healthcare Integrated Research Database between 2006 and 2014.
Disclosures: Dr. Goldberg received support from the National Institutes of Health. The investigators had no conflicts.
Persistently nondysplastic Barrett’s esophagus did not protect against progression
Patients with at least five biopsies showing nondysplastic Barrett’s esophagus were statistically as likely to progress to high-grade dysplasia or esophageal adenocarcinoma as patients with a single such biopsy, according to a multicenter prospective registry study reported in the June issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2017.02.019).
The findings, which contradict those from another recent multicenter cohort study (Gastroenterology. 2013;145[3]:548-53), highlight the need for more studies before lengthening the time between surveillance biopsies in patients with nondysplastic Barrett’s esophagus, Rajesh Krishnamoorthi, MD, of Mayo Clinic in Rochester, Minn., wrote with his associates.
Barrett’s esophagus is the strongest predictor of esophageal adenocarcinoma, but studies have reported mixed results as to whether the risk of this cancer increases over time or wanes with consecutive biopsies that indicate nondysplasia, the researchers noted. Therefore, they studied the prospective, multicenter Mayo Clinic Esophageal Adenocarcinoma and Barrett’s Esophagus registry, excluding patients who progressed to adenocarcinoma within 12 months, had missing data, or had no follow-up biopsies. This approach left 480 subjects for analysis. Patients averaged 63 years of age, 78% were male, the mean length of Barrett’s esophagus was 5.7 cm, and the average time between biopsies was 1.8 years, with a standard deviation of 1.3 years.
A total of 16 patients progressed to high-grade dysplasia or esophageal adenocarcinoma over 1,832 patient-years of follow-up, for an overall annual risk of progression of 0.87%. Two patients progressed to esophageal adenocarcinoma (annual risk, 0.11%; 95% confidence interval, 0.03% to 0.44%), while 14 patients progressed to high-grade dysplasia (annual risk, 0.76%; 95% CI, 0.45% to 1.29%). Eight patients progressed to one of these two outcomes after a single nondysplastic biopsy, three progressed after two such biopsies, three progressed after three such biopsies, none progressed after four such biopsies, and two progressed after five such biopsies. Statistically, patients with at least five consecutive nondysplastic biopsies were no less likely to progress than were patients with only one nondysplastic biopsy (hazard ratio, 0.48; 95% CI, 0.07 to 1.92; P = .32). Hazard ratios for the other groups ranged between 0.0 and 0.85, with no significant difference in estimated risk between groups (P = .68) after controlling for age, sex, and length of Barrett’s esophagus.
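The annual risks quoted above are simple incidence rates, that is, events divided by person-years of follow-up. A short sketch reproduces the arithmetic; the confidence intervals are taken from the paper, not recomputed:

```python
# Recomputing the quoted annual progression risks as events per
# person-year of follow-up (the CIs come from the paper itself).
person_years = 1832

for label, events in [("HGD or EAC", 16), ("EAC alone", 2), ("HGD alone", 14)]:
    rate = events / person_years
    print(f"{label}: {events} events -> {rate:.2%} per year")

# HGD or EAC: 16 events -> 0.87% per year
# EAC alone: 2 events -> 0.11% per year
# HGD alone: 14 events -> 0.76% per year
```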
The previous multicenter cohort study linked persistently nondysplastic Barrett’s esophagus with a lower rate of progression to esophageal adenocarcinoma, and, based on those findings, the authors suggested lengthening intervals between biopsy surveillance or even stopping surveillance, Dr. Krishnamoorthi and his associates noted. However, that study did not have mutually exclusive groups. “Additional data are required before increasing the interval between surveillance endoscopies based on persistence of nondysplastic Barrett’s esophagus,” they concluded.
Risk of misclassification bias was low because patients had long-segment Barrett’s esophagus and specialized gastrointestinal pathologists interpreted all histology specimens, the researchers noted. “The small number of progressors is a potential limitation, reducing power to assess associations,” they added.
The investigators did not report funding sources. They reported having no conflicts of interest.
Current practice guidelines recommend endoscopic surveillance in Barrett’s esophagus (BE) patients to detect esophageal adenocarcinoma (EAC) at an early and potentially curable stage.
As currently practiced, endoscopic surveillance of BE has numerous limitations, which provides the impetus to improve risk stratification and, ultimately, the effectiveness of current surveillance strategies. Persistence of nondysplastic BE (NDBE) has previously been shown to be an indicator of lower risk of progression to high-grade dysplasia (HGD)/EAC. However, outcomes studies on this topic have reported conflicting results.
Where do we stand with regard to persistence of NDBE and its impact on surveillance intervals? Future large cohort studies are required that address all potential confounders and include a large number of patients with progression to HGD/EAC (a challenge given the rarity of this outcome). At the present time, based on the available data, surveillance intervals cannot be lengthened in patients with persistent NDBE. Future studies also need to focus on the development and validation of prediction models that incorporate clinical, endoscopic, and histologic factors in risk stratification. Until then, meticulous examination techniques, cognitive knowledge and training, use of standardized grading systems, and use of high-definition white light endoscopy are critical in improving effectiveness of surveillance programs in BE patients.
Sachin Wani, MD, is associate professor of medicine and medical codirector of the Esophageal and Gastric Center of Excellence, division of gastroenterology and hepatology, University of Colorado at Denver, Aurora. He is supported by the University of Colorado Department of Medicine Outstanding Early Scholars Program and is a consultant for Medtronic and Boston Scientific.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: Patients with multiple consecutive biopsies showing nondysplastic Barrett’s esophagus were statistically as likely to progress to esophageal adenocarcinoma or high-grade dysplasia as those with a single nondysplastic biopsy.
Major finding: Hazard ratios for progression ranged between 0.00 and 0.85, with no significant difference in estimated risk among groups stratified by number of consecutive nondysplastic biopsies (P = .68), after controlling for age, sex, and length of Barrett’s esophagus.
Data source: A prospective multicenter registry of 480 patients with nondysplastic Barrett’s esophagus and multiple surveillance biopsies.
Disclosures: The investigators did not report funding sources. They reported having no conflicts of interest.
AGA Guideline: Transient elastography in liver fibrosis, most used and most accurate
Vibration-controlled transient elastography (VCTE) can accurately diagnose cirrhosis in most patients with chronic liver disease, particularly those with chronic hepatitis B or C, states a new guideline from the AGA Institute, published in the May issue of Gastroenterology (doi: 10.1053/j.gastro.2017.03.017).
However, magnetic resonance elastography (MRE) is somewhat more accurate for detecting cirrhosis in nonalcoholic fatty liver disease, wrote Joseph K. Lim, MD, AGAF, of Yale University in New Haven, Conn., with his associates from the Clinical Guidelines Committee of the AGA. VCTE is convenient but performs unevenly in some liver conditions and is especially unreliable in patients with acute hepatitis, alcohol abuse, food intake within 2-3 hours, congestive heart failure, or extrahepatic cholestasis, the guideline notes. Yet, VCTE remains the most common imaging tool for diagnosing fibrosis in the United States, and the guideline addresses “focused, clinically relevant questions” to guide its use.
When possible, clinicians should use VCTE instead of noninvasive serum tests for cirrhosis in patients with chronic hepatitis C, the guideline asserts. In pooled analyses of 62 studies, VCTE detected about 89% of cirrhosis cases (95% confidence interval, 84%-92%), Fibrosis-4 test (FIB-4) detected 87% (95% CI, 74%-94%), and aspartate aminotransferase to platelet ratio index (APRI) detected 77% (95% CI, 73%-81%). The specificity of VCTE (91%) also equaled or exceeded that of FIB-4 (91%) or APRI (78%), the guideline noted.
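For reference, both serum indices are short formulas computed from routine laboratory values. The sketch below uses their standard published definitions; the input values are illustrative and not drawn from any study patient:

```python
# Standard published formulas for the two serum fibrosis indices.
# Example inputs are illustrative only.
from math import sqrt

def apri(ast: float, ast_uln: float, platelets: float) -> float:
    """AST-to-platelet ratio index: (AST / upper limit of normal x 100)
    divided by platelet count (10^9/L)."""
    return (ast / ast_uln) * 100 / platelets

def fib4(age: float, ast: float, alt: float, platelets: float) -> float:
    """Fibrosis-4 index: (age x AST) / (platelets x sqrt(ALT))."""
    return (age * ast) / (platelets * sqrt(alt))

print(f"APRI:  {apri(ast=80, ast_uln=40, platelets=120):.2f}")      # 1.67
print(f"FIB-4: {fib4(age=55, ast=80, alt=60, platelets=120):.2f}")  # 4.73
```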
For chronic hepatitis C, MRE had “poorer specificity with higher false-positive rates, suggesting poorer diagnostic performance,” compared with VCTE. Lower cost and greater point-of-care availability make VCTE “an attractive solution compared to MRE,” the guideline adds. It conditionally recommends VCTE cutoffs of 12.5 kPa for cirrhosis and 9.5 kPa for advanced (F3-F4) liver fibrosis after patients have a sustained virologic response to therapy. The 9.5-kPa cutoff would misclassify only 1% of low-risk patients and 7% of high-risk patients, but noncirrhotic patients (less than 9.5 kPa) may reasonably choose to continue specialty care if they prioritize avoiding “the small risk” of hepatocellular carcinoma over the “inconvenience and risks of continued laboratory and fibrosis testing.”
For chronic hepatitis B, the guideline conditionally recommends VCTE with an 11.0-kPa cutoff over APRI or FIB-4. In a pooled analysis of 28 studies, VCTE detected cirrhosis with a sensitivity of 86% and a specificity of 85%, compared with 66% and 74%, respectively, for APRI, and 87% and 65%, respectively, for FIB-4. However, the overall diagnostic performance of VCTE resembled that of the serum tests, and clinicians should interpret VCTE in the context of other clinical cirrhosis data, the guideline states.
Among 17 studies of VCTE cutoffs in hepatitis B, an 11.0-kPa threshold diagnosed cirrhosis with a sensitivity of 81% and a specificity of 83%. This cutoff would miss cirrhosis in less than 1% of low-risk patients and about 5% of high-risk patients and would yield false positives in 10%-15% of patients. Thus, the cutoff minimizes false negatives, reflecting “a judgment that the harm of missing cirrhosis is greater than the harm of overdiagnosis,” the authors write.
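Those per-population miss and false-positive figures follow from basic test arithmetic once a pretest prevalence of cirrhosis is assumed. In the sketch below, the two prevalences are assumptions chosen to roughly reproduce the guideline’s numbers; they are not values stated in the guideline:

```python
# How fixed sensitivity/specificity translate into per-population miss
# and false-positive rates. The prevalences are assumed for illustration.
sensitivity, specificity = 0.81, 0.83  # pooled values, 11.0-kPa cutoff

for label, prevalence in [("low-risk population", 0.05),
                          ("high-risk population", 0.25)]:
    missed = prevalence * (1 - sensitivity)           # false negatives
    false_pos = (1 - prevalence) * (1 - specificity)  # false positives
    print(f"{label}: misses {missed:.2%} of all patients; "
          f"false positives in {false_pos:.2%}")

# low-risk population: misses 0.95% of all patients; false positives in 16.15%
# high-risk population: misses 4.75% of all patients; false positives in 12.75%
```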
For chronic alcoholic liver disease, the AGA conditionally recommends VCTE with a cirrhosis cutoff of 12.5 kPa. In pooled analyses, this value had a sensitivity of 95% and a specificity of 71%. For suspected compensated cirrhosis, the guideline conditionally suggests a 19.5-kPa cutoff when assessing the need for esophagogastroduodenoscopy (EGD) to identify high-risk esophageal varices. Patients who fall below this cutoff can reasonably pursue screening endoscopy if they are concerned about the small risk of acute variceal hemorrhage, the guideline adds.
The guideline also conditionally recommends a 17-kPa cutoff to detect clinically significant portal hypertension in patients with suspected chronic liver disease who are undergoing elective nonhepatic surgeries. This cutoff will miss about 0.1% of very low-risk patients, 0.8% of low-risk patients, and 7% of high-risk patients. Because the failure to detect portal hypertension contributes to operative morbidity and mortality, higher-risk patients might “reasonably” pursue screening endoscopy even if their liver stiffness falls below the cutoff, the guideline states.
The guideline made no recommendation about VCTE versus APRI or FIB-4 in adults with nonalcoholic fatty liver disease (NAFLD), citing “unacceptable bias” in 12 studies that excluded obese patients, used per-protocol rather than intention-to-diagnose analyses, and ignored “unsuccessful or inadequate” liver stiffness measurements, which are relatively common in NAFLD, the guideline notes. It conditionally recommends MRE over VCTE in high-risk adults with NAFLD, including those who are older, diabetic, or obese (especially with central adiposity) or who have alanine aminotransferase levels more than twice the upper limit of normal. However, it cites insufficient evidence to extend this recommendation to low-risk patients who only have imaging evidence of fatty liver.
Overall, the guideline focuses on “routine clinical management issues, and [does] not address comparisons with proprietary serum fibrosis assays, other emerging imaging-based fibrosis assessment techniques, or combinations of more than one noninvasive fibrosis test,” the authors note. They also limited VCTE cutoffs to single thresholds that prioritized sensitivity over specificity. “Additional studies are needed to further define the role of VCTE, MRE, and emerging diagnostic studies in the assessment of liver fibrosis, for which a significant unmet medical need remains, particularly in conditions such as NAFLD/[nonalcoholic steatohepatitis],” they add. “In particular, defining the implications for serial liver stiffness measurements over time on management decisions is of great interest.”
Dr. Muir has served as a consultant for AbbVie, Bristol-Myers Squibb, Gilead, and Merck. Dr. Lim has served as a consultant for Bristol-Myers Squibb, Gilead, Merck, and Boehringer Ingelheim. Dr. Flamm has served as a consultant or received research support from Gilead, Bristol-Myers Squibb, AbbVie, Salix Pharmaceuticals, and Intercept Pharmaceuticals. Dr. Dieterich has presented lectures for Gilead and Merck products. The rest of the authors disclosed no conflicts related to the content of this guideline.
FROM GASTROENTEROLOGY
VIDEO: Study confirms uneven access to liver cancer treatment at VA hospitals
Only 25% of Veterans Affairs (VA) patients with potentially curable (Barcelona Clinic Liver Cancer stage 0/A) hepatocellular carcinoma received resection, transplantation, or ablative therapy, according to the results of a national retrospective cohort study published in the June issue of Gastroenterology (doi: 10.1053/j.gastro.2017.02.040).
Furthermore, 13% of the fittest (ECOG performance status 1-2) patients received no active treatment for their hepatocellular carcinoma, Marina Serper, MD, of Corporal Michael J. Crescenz VA Medical Center, Philadelphia, and Tamar H. Taddei, MD, of VA New York Harbor Health Care System, Brooklyn, N.Y., wrote with their associates in Gastroenterology.
“Delivery of curative therapies conferred the highest survival benefit, and notable geographic and specialist variation was observed in the delivery of active treatment,” they added. “Future studies should further evaluate modifiable health system and provider-specific barriers to delivering high quality, multidisciplinary care in hepatocellular carcinoma [in order] to optimize patient outcomes.”
Hepatocellular carcinoma ranks second worldwide and fifth in the United States as a cause of cancer mortality. Gastroenterologists, hepatologists, medical oncologists, or surgeons may take primary responsibility for treatment in community settings, but little is known about how provider and health system factors affect outcomes or the likelihood of receiving active treatments, such as liver transplantation, resection, ablative or transarterial therapy, sorafenib, systemic chemotherapy, or radiation. Accordingly, the researchers reviewed medical records and demographic data from all 3,988 U.S. patients diagnosed with hepatocellular carcinoma between 2008 and 2010 who received care at 128 Veterans Affairs centers. Patients were followed through the end of 2014. Data were from the Veterans Outcomes and Costs Associated with Liver Disease (VOCAL) cohort study (Gastroenterology. 2017 Mar 7. doi: 10.1053/j.gastro.2017.02.040).
After diagnosis, most (54%) patients only underwent transarterial palliative therapy, and 24% received no cancer treatment, the researchers reported. Being treated at an academically affiliated VA hospital nearly doubled the odds of receiving active therapy (odds ratio, 1.97; 95% confidence interval, 1.6 to 2.4; P less than .001), even after the researchers controlled for race, Charlson-Deyo comorbidity, and presenting Barcelona Clinic Liver Cancer stage. Evaluation by multiple specialists also significantly increased the odds of active treatment (OR, 1.60; 95% CI, 1.15 to 2.21; P = .005), but review by a multidisciplinary tumor board did not (OR, 1.19; P = .1).
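One caution in reading that result: for outcomes as common as receipt of active therapy, an odds ratio overstates the corresponding ratio of probabilities. A standard approximation (Zhang and Yu) converts an odds ratio to a risk ratio given the rate in the reference group; the baseline rates below are assumptions for illustration, although the figures above imply that roughly three-quarters of patients overall received some cancer treatment:

```python
# Odds ratios exaggerate probability ratios when an outcome is common.
# Zhang-Yu approximation: RR = OR / (1 - p0 + p0 * OR), where p0 is the
# outcome rate in the reference group. The p0 values here are assumed.
def or_to_rr(odds_ratio: float, p0: float) -> float:
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

for p0 in (0.10, 0.50, 0.70):
    print(f"baseline rate {p0:.0%}: OR 1.97 ~ RR {or_to_rr(1.97, p0):.2f}")

# baseline rate 10%: OR 1.97 ~ RR 1.80
# baseline rate 50%: OR 1.97 ~ RR 1.33
# baseline rate 70%: OR 1.97 ~ RR 1.17
```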
Receipt of active therapy also varied significantly by region. Compared with patients in the Northeastern United States, those in the mid-South were significantly less likely to receive active therapy (HR, 0.62; 95% CI, 0.44-0.85). Patients in the Southeast, Central, and Western United States also were less likely to receive active treatment than were those in the Northeast, but those differences were not statistically significant (the 95% CIs crossed 1.0). Virtual tumor boards could help overcome diagnostic and treatment delays, but costs, care coordination, patient factors, and compensation issues are major barriers to implementation, the investigators noted.
Active treatment of hepatocellular carcinoma was associated with improved overall survival, including liver transplantation (hazard ratio, 0.22; 95% CI, 0.16-0.31), liver resection (HR, 0.38; 95% CI, 0.28-0.52), ablative therapy (HR, 0.63; 95% CI, 0.52-0.76), and transarterial therapy (HR, 0.83; 95% CI, 0.74-0.92). Reduced mortality also was associated with seeing a hepatologist (HR, 0.7), medical oncologist (HR, 0.82), or surgeon (HR, 0.79) within 30 days of diagnosis (P less than .001 for each). Review by a multidisciplinary tumor board likewise was associated with significantly reduced mortality (HR, 0.83; P less than .001), said the researchers.
“Findings from the VOCAL cohort of predominantly older males with significant medical comorbidities are important in light of the aging U.S. population and a nearly 70% expected increase in cancer among older adults,” they wrote. Together, the results indicate that access to multidisciplinary and expert care “is critical for optimizing treatment choices and for maximizing survival, but that such access is non-uniform,” they noted. “Detailed national VA clinical and administrative data are a unique resource that may be tapped to facilitate development of a parsimonious set of evidence-based, patient-centered, liver cancer–specific quality measures,” they emphasized. Quality measures based on timeliness, receipt of appropriate care, survival, or patient-reported outcomes “could be applicable both within and outside the VA system.”
The study was funded by unrestricted grants from Bayer Healthcare Pharmaceuticals and the VA HIV, Hepatitis and Public Health Pathogens Programs. The investigators had no conflicts.
The treatment of hepatocellular carcinoma (HCC) can be challenging because of the presence of underlying chronic liver disease and cirrhosis in the majority of patients. The study by Dr. Serper and colleagues evaluated the care of patients diagnosed with HCC in the Veterans Affairs (VA) system between 2008 and 2010. There are important aspects of this study worth highlighting.
First, 36% of patients presented with early-stage HCC and clearly had better overall survival. This highlights the need for surveillance of patients with cirrhosis not only in the VA but also in other health systems. Second, only a minority of patients with early-stage HCC received curative interventions. To improve outcomes, patients with early-stage disease should receive appropriate curative interventions. Third, gastroenterologists saw a large number of patients with HCC in the VA system, but, unfortunately, this was associated with lower receipt of active therapy and a trend toward worse all-cause mortality, compared with hepatologists and other specialties. It is critical that gastroenterologists refer patients to specialties more adept at treating HCC in order to achieve better outcomes.
Lastly, only 34% of patients with HCC were managed via a multidisciplinary tumor conference. Importantly, these patients had an increased probability of receipt of active treatment and a 17% reduction in all-cause mortality. Our group has shown that a multidisciplinary approach to treating HCC improves overall survival. It is critical that medical centers develop a multidisciplinary treatment approach to HCC.
Jorge A. Marrero, MD, MS, AGAF, is professor of medicine and medical director for liver transplantation at UT Southwestern Medical Center Dallas. He has no conflicts of interest to report regarding this manuscript or commentary.
Only 25% of Veterans Affairs (VA) patients with potentially curable (Barcelona Clinic Liver Cancer stage 0/A) hepatocellular carcinoma received resection, transplantation, or ablative therapy, according to the results of a national retrospective cohort study published in the June issue of Gastroenterology (doi: 10.1053/j.gastro.2017.02.040).
Furthermore, 13% of the fittest (ECOG performance status 1-2) patients received no active treatment for their hepatocellular carcinoma, Marina Serper, MD, of the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, and Tamar H. Taddei, MD, of the VA New York Harbor Health Care System, Brooklyn, N.Y., wrote with their associates in Gastroenterology.
“Delivery of curative therapies conferred the highest survival benefit, and notable geographic and specialist variation was observed in the delivery of active treatment,” they added. “Future studies should further evaluate modifiable health system and provider-specific barriers to delivering high quality, multidisciplinary care in hepatocellular carcinoma [in order] to optimize patient outcomes.”
Hepatocellular carcinoma ranks second worldwide and fifth in the United States as a cause of cancer mortality. Gastroenterologists, hepatologists, medical oncologists, or surgeons may take primary responsibility for treatment in community settings, but little is known about how provider and health system factors affect outcomes or the likelihood of receiving active treatments, such as liver transplantation, resection, ablative or transarterial therapy, sorafenib, systemic chemotherapy, or radiation. Accordingly, the researchers reviewed medical records and demographic data from all 3,988 U.S. patients diagnosed with hepatocellular carcinoma between 2008 and 2010 who received care at 128 Veterans Affairs centers. Patients were followed through the end of 2014. Data were from the Veterans Outcomes and Costs Associated with Liver Disease (VOCAL) cohort study (Gastroenterology. 2017 Mar 7. doi: 10.1053/j.gastro.2017.02.040).
After diagnosis, most (54%) patients only underwent transarterial palliative therapy, and 24% received no cancer treatment, the researchers reported. Being treated at an academically affiliated VA hospital nearly doubled the odds of receiving active therapy (odds ratio, 1.97; 95% confidence interval, 1.6 to 2.4; P less than .001), even after the researchers controlled for race, Charlson-Deyo comorbidity, and presenting Barcelona Clinic Liver Cancer stage. Evaluation by multiple specialists also significantly increased the odds of active treatment (OR, 1.60; 95% CI, 1.15 to 2.21; P = .005), but review by a multidisciplinary tumor board did not (OR, 1.19; P = .1).
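For readers who want to see the arithmetic behind an odds ratio of this kind, the short Python sketch below computes an unadjusted odds ratio and a Wald 95% CI from a 2-by-2 table. The counts are hypothetical stand-ins chosen purely for illustration, not the study's data, and the published estimate was additionally adjusted for race, comorbidity, and stage.

    import math

    # Hypothetical counts (illustration only -- not the study's data).
    # Rows: academically affiliated vs. other VA hospital.
    # Columns: active therapy received vs. not received.
    a, b = 900, 600    # academic: treated, untreated
    c, d = 700, 920    # other: treated, untreated

    odds_ratio = (a * d) / (b * c)                  # cross-product odds ratio
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # Wald SE of log(OR)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")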
Receipt of active therapy also varied significantly by region. Compared with patients in the Northeastern United States, those in the mid-South were significantly less likely to receive active therapy (HR, 0.62; 95% CI, 0.44-0.85). Patients in the Southeast, Central, and Western United States also were less likely to receive active treatment than were those in the Northeast, but the 95% CIs for these hazard ratios crossed 1.0, so the differences did not reach statistical significance. Virtual tumor boards could help overcome diagnostic and treatment delays, but costs, care coordination, patient factors, and compensation issues are major barriers to implementation, the investigators noted.
Active treatment of hepatocellular carcinoma was associated with improved overall survival, including liver transplantation (hazard ratio, 0.22; 95% CI, 0.16-0.31), liver resection (HR, 0.38; 95% CI, 0.28-0.52), ablative therapy (HR, 0.63; 95% CI, 0.52-0.76), and transarterial therapy (HR, 0.83; 95% CI, 0.74-0.92). Reduced mortality also was associated with seeing a hepatologist (HR, 0.7), medical oncologist (HR, 0.82), or surgeon (HR, 0.79) within 30 days of diagnosis (P less than .001 for each). Undergoing review by a multidisciplinary tumor board was associated with significantly reduced mortality as well (HR, 0.83; P less than .001), the researchers said.
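To make these ratios concrete: under the proportional hazards assumption, a hazard ratio rescales the baseline hazard, so survival translates as S_treated(t) = S_baseline(t)^HR. The sketch below applies the reported hazard ratios to an assumed baseline survival; the 40% three-year figure is an assumption for illustration, not a number from the study.

    # Illustrative translation of hazard ratios into survival, assuming
    # proportional hazards: S_treated(t) = S_baseline(t) ** HR.
    # The 40% baseline 3-year survival is an assumption, not study data.
    baseline_survival_3y = 0.40

    hazard_ratios = {
        "liver transplantation": 0.22,
        "liver resection": 0.38,
        "ablative therapy": 0.63,
        "transarterial therapy": 0.83,
    }
    for treatment, hr in hazard_ratios.items():
        survival = baseline_survival_3y ** hr
        print(f"{treatment}: HR {hr:.2f} -> ~{survival:.0%} 3-year survival")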
“Findings from the VOCAL cohort of predominantly older males with significant medical comorbidities are important in light of the aging U.S. population and a nearly 70% expected increase in cancer among older adults,” they wrote. Together, the results indicate that access to multidisciplinary and expert care “is critical for optimizing treatment choices and for maximizing survival, but that such access is non-uniform,” they noted. “Detailed national VA clinical and administrative data are a unique resource that may be tapped to facilitate development of a parsimonious set of evidence-based, patient-centered, liver cancer–specific quality measures,” they emphasized. Quality measures based on timeliness, receipt of appropriate care, survival, or patient-reported outcomes “could be applicable both within and outside the VA system.”
The study was funded by unrestricted grants from Bayer Healthcare Pharmaceuticals and the VA HIV, Hepatitis and Public Health Pathogens Programs. The investigators had no conflicts.
FROM GASTROENTEROLOGY
Key clinical point: Undertreatment of hepatocellular carcinoma was common within the Veterans Affairs system, and varied by geographic region.
Major finding: Only 25% of Barcelona Clinic Liver Cancer stage 0/A patients received potentially curative therapies. Those in the mid-South were significantly less likely to receive active treatment than were those in the Northeast (HR, 0.62; 95% CI, 0.44-0.85). In an adjusted model, treatment at an academically affiliated VA hospital nearly doubled the odds of receiving active therapy (odds ratio, 1.97; P less than .001).
Data source: A national, retrospective cohort study of all 3,988 patients who were diagnosed with hepatocellular carcinoma between 2008 and 2010 and received care through Veterans Affairs.
Disclosures: The study was funded by unrestricted grants from Bayer Healthcare Pharmaceuticals and the VA HIV, Hepatitis and Public Health Pathogens Programs. The investigators had no conflicts.
VIDEO: Study estimates prevalence of pediatric celiac disease, autoimmunity
By age 15 years, 3.1% of adolescents in Denver developed celiac disease, and another 2% developed a lesser degree of celiac disease autoimmunity, according to a 20-year prospective longitudinal study.
“Although more than 5% of children may experience a period of celiac disease autoimmunity [CDA], not all develop celiac disease [CD] or require gluten-free diets,” Edwin Liu, MD, of University of Colorado School of Medicine and Children’s Hospital Colorado (Aurora, Colo.), wrote with his associates in the May issue of Gastroenterology (doi: 10.1053/j.gastro.2017.02.002). Most celiac autoimmunity probably develops before age 10, “which informs future efforts for universal screening,” they added.
About 40% of the general population has the HLA-DQ2 or -DQ8 risk genotypes for celiac disease, but little is known about rates of the disease among children in the United States, the researchers said. To help fill this gap, they analyzed celiac-risk HLA genotypes for 31,766 infants born between 1993 and 2004 from the Diabetes Autoimmunity Study in the Young. The 1,339 children with HLA risk genotypes were followed for up to 20 years.
By age 15 years, 66 of these children (4.9%) had developed tissue transglutaminase autoantibodies (tTGA) consistent with CDA, and also met criteria for CD, the researchers said. Another 46 (3.4%) children developed only CDA, of whom 46% experienced spontaneous resolution of tTGA seropositivity without treatment. By using genotype-specific risk weighting for population frequencies of HLA, the researchers estimated that 2.4% of the general population of Denver had CDA by age 5 years, 4.3% had CDA by age 10 years, and 5.1% had CDA by age 15 years. Estimated rates of CD were 1.6%, 2.8%, and 3.1%, respectively.
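The population-level estimates follow from a frequency-weighted average: each genotype's cumulative incidence is multiplied by that genotype's frequency in the general population, and the products are summed. A minimal sketch of that arithmetic follows; every frequency and risk value below is a hypothetical stand-in, since the paper's genotype-specific estimates are not reproduced here.

    # Population cumulative incidence as a frequency-weighted average of
    # genotype-specific risks. All numbers are hypothetical stand-ins.
    genotypes = {
        # genotype: (population frequency, cumulative CD risk by age 15)
        "DQ2/DQ2": (0.03, 0.12),
        "DQ2/DQ8": (0.02, 0.09),
        "DQ2/X":   (0.20, 0.04),
        "DQ8/X":   (0.15, 0.02),
        "no-risk": (0.60, 0.00),
    }
    population_risk = sum(freq * risk for freq, risk in genotypes.values())
    print(f"Estimated population cumulative incidence: {population_risk:.1%}")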
These findings suggest a significant rise in the incidence of CD compared with historical estimates in the United States, and reflect recent studies “using different approaches in North America,” the researchers said. Reasons for the “dramatic increase” are unknown, but environmental causes seem likely, especially given the absence of identified genetic differences and marked changes in the prevalence of CD during the past 2 decades, they added.
Several other reports have documented fluctuating and transient tTGA antibodies in children, the researchers noted. Awareness of transient CD autoantibodies might limit public acceptance of universal screening programs for CD, they said. “Continued long-term follow-up will identify whether the autoimmunity in these subjects truly abates and tolerance develops, or if CDA will recur in time, possibly in response to additional stimulating events,” they added. “At present, low positive tTGA results should be interpreted with caution, and do not necessarily indicate need for biopsy or for treatment.”
The study did not include the DR5/DR7 risk genotype, which accounts for less than 5% of CD cases. The study also did not account for the estimated 2.5% of the general population that has DR3/DR7, which can be considered high risk, the researchers said. Thus, the study is conservative and might underestimate the real incidence of CD or CDA, they added.
The National Institutes of Health provided funding. The investigators reported having no conflicts of interest.
This study calls into question previous estimates of the incidence of celiac disease in the modern pediatric population and, by extension, of its future prevalence in adults. This unique prospective cohort study followed children for a decade and a half and estimated a cumulative incidence of celiac disease of 3.1% by age 15. In sharp contrast, previous retrospective population-based studies estimated a prevalence of approximately 0.75%-1% in adult and pediatric populations. A recent publication by the United States Preventive Services Task Force used the previously accepted prevalence estimates to recommend against routine screening for celiac disease in the asymptomatic general population as well as targeted screening in those at higher risk. The increase in disease incidence reported by the current study may call these recommendations into question, particularly in young children, in whom cumulative incidence was high and the potential for treatment benefit is substantial.
Dawn Wiese Adams, MD, MS, is assistant professor, director of celiac clinic, in the department of gastroenterology, hepatology, and nutrition, Vanderbilt University Medical Center, Nashville, Tenn. She has no conflicts of interest.
Key clinical point: The presence of celiac disease autoimmunity does not predict universal progression to celiac disease.
Major finding: By age 15 years, an estimated 3.1% of children in Denver developed celiac disease, and another 2% developed a lesser degree of celiac disease autoimmunity that often resolved spontaneously without treatment.
Data source: A 20-year prospective study of 1,339 children with genetic risk factors for celiac disease, with extrapolation based on the prevalence of human leukocyte antigen genotypes in the general population.
Disclosures: The National Institutes of Health provided funding. The investigators reported having no conflicts of interest.
VIDEO: Occult cancers contribute to GI bleeding in anticoagulated patients
Occult cancers accounted for about one in every 12 major gastrointestinal bleeding events among patients taking warfarin or dabigatran for atrial fibrillation, according to a retrospective analysis of data from a randomized prospective trial reported in the May issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2016.10.011).
These bleeding events caused similarly significant morbidity among patients taking either drug, Kathryn F. Flack, MD, of the Icahn School of Medicine at Mount Sinai in New York, and her associates wrote. “Patients bleeding from cancer required a mean of approximately 10 nights in the hospital, and approximately one-fourth required intensive care, but 0 of 44 died as a direct result of the bleeding,” the researchers reported. They hoped that the specific dabigatran reversal agent, idarucizumab (Praxbind), would improve bleeding outcomes in patients receiving dabigatran.
Major gastrointestinal bleeding (MGIB) is the first sign of occult malignancy in certain patients receiving anticoagulation therapy. Starting an anticoagulant is a type of “stress test” that can reveal an occult cancer, the researchers said. Although dabigatran etexilate (Pradaxa) is generally safe and effective, a twice-daily, 150-mg dose of this direct oral anticoagulant slightly increased MGIB, compared with a lower dose in the international, multicenter RE-LY (Randomized Evaluation of Long Term Anticoagulant Therapy) trial (N Engl J Med. 2009;361:1139-51). Furthermore, unlike warfarin, dabigatran therapy places active anticoagulant within the luminal gastrointestinal tract, which “might promote bleeding from friable gastrointestinal cancers,” the investigators noted. To explore this possibility, they evaluated 546 unique MGIB events among RE-LY patients.
Medical chart reviews identified 44 (8.1%) MGIB events resulting from occult gastrointestinal cancers. Cancer accounted for similar proportions of MGIB among warfarin and dabigatran recipients (8.5% and 6.8%; P = .6). Nearly all cancers were colorectal or gastric, except for one case each of ampullary cancer, renal cell carcinoma, and melanoma that had metastasized to the luminal gastrointestinal tract. Colorectal cancer accounted for 80% of cancer-related MGIB overall, including 88% in the dabigatran group and 50% in the warfarin group (P = .02). Conversely, warfarin recipients had more MGIB associated with gastric cancer (50%) than did dabigatran recipients (2.9%; P = .001).
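With only 44 cancer-related events, subgroup comparisons such as the colorectal split rest on small counts, for which a Fisher exact test is one standard tool (this summary does not state which test the authors used). The sketch below uses hypothetical counts chosen only to be roughly consistent with the reported percentages; the paper's exact counts are not reproduced here.

    from scipy.stats import fisher_exact

    # Hypothetical 2x2 counts (illustration only -- not the study's data).
    # Rows: dabigatran vs. warfarin cancer-related MGIB.
    # Columns: colorectal vs. other primary site.
    table = [[30, 4],   # dabigatran: colorectal, other
             [5, 5]]    # warfarin: colorectal, other

    odds_ratio, p_value = fisher_exact(table)
    print(f"OR = {odds_ratio:.1f}, P = {p_value:.3f}")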
Short-term outcomes of MGIB associated with cancer did not vary by anticoagulant, the investigators said. There were no deaths, but two (4.5%) MGIB events required emergency endoscopic treatment, one (2.3%) required emergency surgery, and 33 (75%) required at least one red blood cell transfusion. Compared with patients whose MGIB was unrelated to cancer, those with cancer were more likely to bleed for more than 7 days (63.6% vs. 27.3%; P less than .001). Patients with occult cancer also developed MGIB sooner after starting anticoagulation (223 vs. 343 days; P = .003), but time to bleeding did not significantly vary by type of anticoagulant.
“Most prior studies on cancer bleeding have been case reports and case series in patients receiving warfarin,” the investigators wrote. “Our study is relevant because of the increasing prevalence of atrial fibrillation and anticoagulation in the aging global population, the increasing prescription of direct oral anticoagulants, and the morbidity, mortality, and complex decision making associated with MGIB and especially cancer-related MGIB in patients receiving anticoagulation therapy.”
The RE-LY trial was sponsored by Boehringer Ingelheim. Dr. Flack reported no conflicts of interest. Senior author James Aisenberg, MD, disclosed advisory board and consulting relationships with Boehringer Ingelheim and Portola Pharmaceuticals. Five other coinvestigators disclosed ties to several pharmaceutical companies, and two coinvestigators reported employment with Boehringer Ingelheim. The other coinvestigators had no conflicts.
Dr. Flack and her colleagues should be congratulated for providing important data from their review of 546 major GI bleeding events in a large randomized prospective trial of long-term anticoagulation in subjects with atrial fibrillation (AF). They found that 1 in every 12 major GI bleeding events in patients on warfarin or dabigatran was associated with an occult cancer, with colorectal cancer being the most common.
How will these results help us in clinical practice? First, when faced with GI bleeding in AF subjects on anticoagulants, a proactive diagnostic approach is needed to search for a potential luminal GI malignancy; whether screening for GI malignancy before initiating anticoagulants is beneficial requires prospective studies with cost analysis. Second, cancer-related GI bleeding in dabigatran users occurs earlier than noncancer-related bleeding. Given that a fraction of GI bleeding events were not investigated, one cannot exclude the possibility of undiagnosed luminal GI cancers in the comparator group. Third, cancer-related bleeding is associated with prolonged hospital stay. We should seize the opportunity to study the effects of this double-edged sword: anticoagulants may help us reveal occult malignancy, but more importantly, we need to determine whether the dabigatran reversal agent idarucizumab can improve bleeding outcomes in patients on dabigatran presenting with cancer-related bleeding.
Siew C. Ng, MD, PhD, AGAF, is professor at the department of medicine and therapeutics, Institute of Digestive Disease, Chinese University of Hong Kong. She has no conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Key clinical point: Occult cancers accounted for about 1 in every 12 major gastrointestinal bleeding events among patients receiving warfarin or dabigatran for atrial fibrillation.
Major finding: A total of 44 (8.1%) major gastrointestinal bleeds were associated with occult cancers.
Data source: A retrospective analysis of 546 unique major gastrointestinal bleeding events from the Randomized Evaluation of Long Term Anticoagulant Therapy (RE-LY) trial.
Disclosures: RE-LY was sponsored by Boehringer Ingelheim. Dr. Flack had no conflicts of interest. Senior author James Aisenberg, MD, disclosed advisory board and consulting relationships with Boehringer Ingelheim and Portola Pharmaceuticals. Five other coinvestigators disclosed ties to several pharmaceutical companies, and two coinvestigators reported employment with Boehringer Ingelheim. The other coinvestigators had no conflicts.