Coffee consumption affects cancer risk differently for liver vs. pancreatic cancers
Drinking tea or caffeinated or decaf coffee is unlikely to influence a person’s risk for pancreatic cancer, but consuming coffee of any kind may reduce the risk of the most common liver cancer by as much as 50% (depending on amount consumed), according to two recent studies in Clinical Gastroenterology and Hepatology.
In the pancreatic cancer study, Dr. Nirmala Bhoo-Pathy of University Medical Center Utrecht, the Netherlands, and her colleagues reported, "Our results strengthen the conclusion made by the World Cancer Research Fund and the American Institute of Cancer Research that there is little evidence to support a causal relation between coffee and risk of pancreatic cancer" (Clin. Gastroenterol. Hepatol. 2013 [doi:10.1016/j.cgh.2013.05.029]).
Meanwhile, a 16-study meta-analysis of coffee intake and risk for hepatocellular carcinoma, which accounts for more than 90% of worldwide liver cancers, revealed a 40% decreased risk (relative risk, 0.60; 95% confidence interval: 0.50-0.71) for any coffee consumption vs. no consumption. Yet Dr. Francesca Bravi of Università degli Studi di Milano and her colleagues reported that their findings could not establish a causal relationship between coffee drinking and hepatocellular carcinoma (Clin. Gastroenterol. Hepatol. 2013 [doi:10.1016/j.cgh.2013.04.039]).
Even such a causal relationship may have limited clinical significance, however, considering that more than 90% of primary liver cancers worldwide can theoretically be prevented through hepatitis B vaccination, control of hepatitis C transmission, and reduction of alcohol consumption, Dr. Bravi’s team wrote.
In the first study, Dr. Bhoo-Pathy and her colleagues examined 865 first incident pancreatic cancers reported in a cohort of 477,312 men and women from 10 European countries tracked prospectively over a mean 11.6 years of follow-up. The participants in the EPIC (European Prospective Investigation into Cancer and Nutrition) cohort completed a dietary questionnaire at baseline in 1992, which was then calibrated with a 24-hour dietary recall by the final follow-up in 2000.
The 23 participating centers were in Denmark, France, Germany, Greece, Italy, the Netherlands, Norway, Spain, Sweden, and the United Kingdom, and median coffee intake across them ranged from 92 mL/day in Italy to 900 mL/day in Denmark. Among the participants with complete information on coffee type (n = 269,593), 50% drank only caffeinated coffee, 4% drank only decaf, 34% drank both, and 12% drank no coffee. Two-thirds (66%) of the total cohort drank tea of any kind (caffeinated, green, or herbal).
Neither total intake of coffee (hazard ratio, 1.03; 95% CI: 0.83-1.27 for high vs. low intake) nor consumption of decaffeinated coffee (HR, 1.12; 95% CI: 0.76-1.63) – reported as cups drunk per day, week, or month and then converted to daily milliliters – showed a significant change in pancreatic cancer risk. Tea consumption of any kind similarly had no impact on risk (HR, 1.22; 95% CI: 0.95-1.56). These risks did not change after accounting for a range of confounders nor when analysis was confined to the 608 (70.3%) cancers that were microscopically confirmed.
Confounders included sex, clinic/center, age at diagnosis, height, weight, physical activity, smoking status, diabetes history, education level, and energy intake, including red meat, processed meat, alcohol, soft drink, tea (for coffee analysis), coffee (for tea analysis), and fruit and vegetable intake.
A comparison of moderately low with low caffeinated coffee intake initially revealed a modest increased risk for moderately low consumption (HR, 1.33; 95% CI: 1.02-1.74) that lost statistical significance when only microscopically confirmed pancreatic cancer cases were analyzed. Additionally, no dose-response effect was noted in any of the findings for pancreatic cancer risk.
Yet a dose-response effect was seen in Dr. Bravi’s study investigating coffee consumption and hepatocellular carcinoma risk. Her team’s update of a 2007 meta-analysis included an additional four cohort and two case-control studies, for a total of eight cohort and eight case-control studies from 14 English-language articles included in PubMed/MEDLINE between 1966 and September 2012.
When broken down by study type, the 40% overall risk reduction for any coffee consumption found among 3,153 hepatocellular carcinoma cases split into a 44% reduction in the case-control studies (RR, 0.56; 95% CI: 0.42-0.75) and a 36% reduction in the cohort studies (RR, 0.64; 95% CI: 0.52-0.78).
The dose-response relationship was seen in separate comparisons of low and high coffee consumption with no coffee consumption, using three cups a day as the cutoff in nine papers and one cup a day in five papers. Low coffee consumption reduced hepatocellular carcinoma risk by 28% (RR, 0.72; 95% CI: 0.61-0.84), while high consumption reduced it by 56% (RR, 0.44).
Each additional cup of coffee per day was associated with a 20% risk reduction (RR, 0.80; 95% CI: 0.77-0.84). This split into a 23% risk reduction in the case-control studies (RR, 0.77; 95% CI: 0.71-0.83) and a 17% risk reduction in the cohort studies (RR, 0.83; 95% CI: 0.78-0.88). A temporal analysis of risk reduction for any coffee consumption showed an increase from a 20% risk reduction in 2000 (RR, 0.80; 95% CI: 0.50-1.29) to 41% in 2007 (RR, 0.59; 95% CI: 0.48-0.72), which has remained stable at about 40% over the past several years.
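The percent risk reductions quoted above are simple transformations of the reported relative risks. As an illustrative sketch in Python (the function name is ours, not from the studies):

```python
def percent_risk_reduction(rr: float) -> float:
    """Convert a relative risk (RR < 1.0) into the percent risk reduction,
    i.e., 100 * (1 - RR)."""
    return (1.0 - rr) * 100.0

# Overall estimate for any vs. no coffee consumption: RR 0.60 -> ~40% reduction
print(round(percent_risk_reduction(0.60)))  # 40

# Per additional cup of coffee per day: RR 0.80 -> ~20% reduction
print(round(percent_risk_reduction(0.80)))  # 20
```

The same arithmetic recovers the 56% figure for high consumption (RR 0.44) and the 28% figure for low consumption (RR 0.72).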
Accounting for the most significant risk factors for liver cancer had little effect on the risk ratios. These factors included hepatitis B and C infections, cirrhosis and other liver diseases, socioeconomic status, alcohol consumption, and smoking.
Dr. Bravi’s team suggested that the risk reduction could be a real, causal effect arising from antioxidants and other compounds in coffee that may inhibit liver carcinogenesis, or from the inverse association of coffee with cirrhosis and with diabetes, both known risk factors for liver cancer. Alternatively, the effect could result, at least in part, from reduced coffee consumption among patients with cirrhosis or other liver disease.
"Thus, a reduction of coffee consumption in unhealthy subjects cannot be ruled out, although the inverse relation between coffee and liver cancer also was present in subjects with no history of hepatitis/liver disease," the researchers wrote. Yet, they also noted the potentially limited utility of coffee risk reduction given the greater impact on reducing liver cancer risk from hepatitis B vaccination, prevention of hepatitis C, and reduction of alcoholic drinking.
The pancreatic cancer study was funded by the European Commission and the International Agency for Research on Cancer, with a long list of additional societies, foundations, and educational institutions supporting the individual national cohorts. The hepatocellular carcinoma study was funded by a grant from the Associazione Italiana per la Ricerca sul Cancro. The authors in both studies reported no disclosures.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: Neither total coffee intake (whether decaffeinated or caffeinated, analyzed separately) nor tea intake appears to influence the risk of pancreatic cancer, but coffee intake of any kind reduces the risk of hepatocellular carcinoma by 40% (RR, 0.60; 95% CI 0.50-0.71), with a dose-response effect even after accounting for participants’ sex, alcohol drinking, and history of hepatitis or liver disease.
Data source: The findings of the pancreatic cancer study are based on prospective analysis of 477,312 initially cancer-free male and female participants from 10 European countries participating in the European Prospective Investigation into Cancer and Nutrition cohort between 1992 and 2000. The liver cancer meta-analysis is based on 14 articles comprising eight case-control studies and eight cohort studies, published in PubMed/MEDLINE between 1966 and September 2012.
Disclosures: The pancreatic cancer study was funded by the European Commission and the International Agency for Research on Cancer, with a long list of additional societies, foundations, and educational institutions supporting the individual national cohorts. The hepatocellular carcinoma study was funded by a grant from the Associazione Italiana per la Ricerca sul Cancro. The authors in both studies reported no disclosures.
Study identifies preferred approach to managing anemia from HCV treatment
The results of a randomized, open-label study suggest that reducing the ribavirin dose should be the "primary approach" for managing anemia associated with peginterferon, ribavirin, and boceprevir therapy in patients with chronic hepatitis C, the authors of the study concluded.
In the study, the effects of two anemia-management strategies, ribavirin (RBV) dose reduction and erythropoietin (EPO) treatment, on the sustained virologic response (SVR) were similar – there was less than 1% difference between the two groups, according to the investigators, Dr. Fred F. Poordad of the Texas Liver Institute, at the University of Texas Health Science Center, San Antonio, and his associates. They also found that SVR rates were significantly lower among those who received less than half of the total ribavirin dose during the entire treatment period, compared with those who received a greater proportion of the total dosage.
"There appears to be no apparent benefit of using EPO as a first-line anemia-management strategy to enhance SVR rate or minimize relapse," the authors concluded. "EPO can be used as a secondary management strategy to prevent treatment interruption if RBV dosage reduction alone is inadequate, but the safety of EPO use in this setting has not been clearly established," they added. The study was published in the November issue of Gastroenterology (2013 [doi:10.1053/j.gastro.2013.07.051]).
The study includes an algorithm for managing boceprevir-related anemia, based on the results of this and other clinical studies, and the authors’ expertise.
The study, conducted between December 2009 and October 2011, compared the two regimens in 500 of 687 previously untreated patients with chronic HCV genotype-1 infections who became anemic (hemoglobin levels dropping to 10 g/dL or lower) during treatment with the three drugs: peginterferon alfa-2b (PegIntron) at a dose of 1.5 mcg/kg per week; ribavirin at a dose of 600-1,400 mg per day, depending on weight; and boceprevir (Victrelis) at a dose of 800 mg three times a day. (After 4 weeks of treatment with peginterferon and RBV, boceprevir was added for 24 or 44 weeks.) Their mean age was about 50 years; 33% were men, 77% were white, and 18% were black; 91% had a baseline viral load of more than 400,000 IU/mL. Almost 90% were in the United States; the rest were in Canada and Europe.
The 500 patients were randomized to treatment with EPO (a subcutaneous injection of 40,000 IU a week) or a reduction in the ribavirin dose (by 200 mg/day or, for those on the 1,400-mg daily dose, by 400 mg). If hemoglobin levels dropped to 7.5 g/dL or lower, patients were withdrawn from the study.
The SVR rate (undetectable HCV RNA 24 weeks after the end of treatment) was 71.5% among those whose ribavirin dosage was reduced and 70.9% among those treated with EPO. Among the 187 patients who did not develop anemia, the SVR rate was 40.1%; this group included a large number of patients who discontinued treatment because of adverse events. But of the 64 who completed treatment, the SVR rate was 89%. The overall SVR rate – among all 687 patients, those randomized and not randomized – was 63%.
Common adverse events were similar in the two randomized treatment groups, with anemia, fatigue, nausea, and headache being the most commonly reported. The rates of serious adverse events were 16% in the RBV dose-reduction arm and 13% in the EPO arm. There were more thromboembolic events among the patients treated with EPO. There was one death in a patient in the RBV arm, a sudden cardiac death 3 weeks after stopping treatment.
The algorithm proposed by the authors, which has different hemoglobin monitoring recommendations for those with and without advanced fibrosis and cirrhosis, recommends that the primary intervention for managing anemia should be to reduce the RBV dosage. But if hemoglobin levels remain below 10 g/dL, "secondary interventions, such as administration of EPO, red cell transfusions, and reducing the dosage of peginterferon can be considered," the authors wrote. In addition, "it is important that the patient receives at least 50% of the total milligrams of RBV calculated from the initial RBV dosage (mg/d) and the assigned duration" defined by the response-guided therapy algorithm, they added.
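The 50% cumulative-dose criterion amounts to comparing the milligrams of ribavirin actually taken against the planned total (initial daily dose times assigned duration). A minimal sketch, with hypothetical numbers (the function name and figures are illustrative, not from the study):

```python
def met_rbv_threshold(initial_daily_mg: float, assigned_days: int,
                      actual_total_mg: float, threshold: float = 0.50) -> bool:
    """Return True if the patient received at least `threshold` (default 50%)
    of the planned total ribavirin: initial daily dose * assigned duration."""
    planned_total_mg = initial_daily_mg * assigned_days
    return actual_total_mg >= threshold * planned_total_mg

# Hypothetical patient: 1,000 mg/day assigned for 196 days -> 196,000 mg planned,
# so the 50% bar is 98,000 mg.
print(met_rbv_threshold(1000, 196, 120_000))  # True
print(met_rbv_threshold(1000, 196, 90_000))   # False
```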
The open-label design was one of the study’s limitations, and whether these results apply to other HCV treatment regimens is unclear, the authors noted. However, the results "would most likely be applicable to all RBV- and peginterferon/RBV-based regimens" for hepatitis C, they added.
The study was funded by Schering-Plough, the manufacturer of PegIntron and combination packs of PegIntron with ribavirin, which is now part of Merck. The investigator disclosures included having served as consultants and speakers, and/or having received grants from multiple pharmaceutical companies; the investigators include several current and former employees of Merck Sharp & Dohme Corp. (a subsidiary of Merck & Co.), the manufacturer of Victrelis.
The results of a randomized, open-label study suggest that reducing the ribavirin dose should be the "primary approach" for managing anemia associated with peginterferon, ribavirin, and boceprevir therapy in patients with chronic hepatitis C, the authors of the study concluded.
In the study, the effects of two anemia-management strategies, ribavirin (RBV) dose reduction and erythropoietin (EPO) treatment, on the sustained virologic response (SVR) were similar – there was less than 1% difference between the two groups, according to the investigators, Dr. Fred F. Poordad of the Texas Liver Institute, at the University of Texas Health Science Center, San Antonio, and his associates. They also found that SVR rates were significantly lower among those who received less than half of the total ribavirin dose during the entire treatment period, compared with those who received a greater proportion of the total dosage.
"There appears to be no apparent benefit of using EPO as a first-line anemia-management strategy to enhance SVR rate or minimize relapse," the authors concluded. "EPO can be used as a secondary management strategy to prevent treatment interruption if RBV dosage reduction alone is inadequate, but the safety of EPO use in this setting has not been clearly established," they added. The study was published in the November issue of Gastroenterology (2013 [doi:10.1053/j.gastro.2013.07.051]).
The study includes an algorithm for managing boceprevir-related anemia, based on the results of this and other clinical studies, and the authors’ expertise.
The results of a randomized, open-label study suggest that reducing the ribavirin dose should be the "primary approach" for managing anemia associated with peginterferon, ribavirin, and boceprevir therapy in patients with chronic hepatitis C, the authors of the study concluded.
In the study, the effects of two anemia-management strategies, ribavirin (RBV) dose reduction and erythropoietin (EPO) treatment, on the sustained virologic response (SVR) were similar – the difference between the two groups was less than 1 percentage point, according to the investigators, Dr. Fred F. Poordad of the Texas Liver Institute at the University of Texas Health Science Center, San Antonio, and his associates. They also found that SVR rates were significantly lower among those who received less than half of the total ribavirin dose during the entire treatment period, compared with those who received a greater proportion of the total dosage.
"There appears to be no apparent benefit of using EPO as a first-line anemia-management strategy to enhance SVR rate or minimize relapse," the authors concluded. "EPO can be used as a secondary management strategy to prevent treatment interruption if RBV dosage reduction alone is inadequate, but the safety of EPO use in this setting has not been clearly established," they added. The study was published in the November issue of Gastroenterology (2013 [doi:10.1053/j.gastro.2013.07.051]).
The study includes an algorithm for managing boceprevir-related anemia, based on the results of this and other clinical studies, and the authors’ expertise.
The study, conducted between December 2009 and October 2011, compared the two regimens in 500 of 687 previously untreated patients with chronic HCV genotype-1 infection who became anemic (hemoglobin levels dropping to 10 g/dL or lower) during treatment with the three drugs: peginterferon alfa-2b (PegIntron) at a dose of 1.5 mcg/kg per week; ribavirin at a dose of 600-1,400 mg per day, depending on weight; and boceprevir (Victrelis) at a dose of 800 mg three times a day. (After 4 weeks of treatment with peginterferon and RBV, boceprevir was added for 24 or 44 weeks.) Their mean age was about 50 years; 33% were men, 77% were white, and 18% were black; 91% had a baseline viral load of more than 400,000 IU/mL. Almost 90% were in the United States; the rest were in Canada and Europe.
The 500 patients were randomized to treatment with EPO (a subcutaneous injection of 40,000 IU a week) or a reduction in the ribavirin dose (200 mg/day or, for those on the 1,400-mg daily dose, a 400-mg reduction). Patients whose hemoglobin levels dropped to 7.5 g/dL or lower were withdrawn from the study.
The SVR rate (undetectable HCV RNA 24 weeks after the end of treatment) was 71.5% among those whose ribavirin dosage was reduced and 70.9% among those treated with EPO. Among the 187 patients who did not develop anemia, the SVR rate was 40.1%; this group included a large number of patients who discontinued treatment because of adverse events. But among the 64 who completed treatment, the SVR rate was 89%. The overall SVR rate – among all 687 patients, both randomized and nonrandomized – was 63%.
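As a rough consistency check, the reported subgroup rates can be combined to approximate the overall figure. The sketch below is illustrative only, and it assumes the 500 anemic patients were randomized about evenly between the two arms (a split not stated in this summary):

```python
# Illustrative arithmetic only: combine the reported subgroup SVR rates
# to approximate the overall SVR rate among all 687 patients.

# Assumes an approximately even 250/250 split between the two arms.
anemic_responders = 250 * 0.715 + 250 * 0.709      # RBV-reduction and EPO arms
non_anemic_responders = 187 * 0.401                # patients who never became anemic

overall_svr = (anemic_responders + non_anemic_responders) / 687
print(f"Approximate overall SVR: {overall_svr:.0%}")
```

This lands at about 63%, matching the overall rate the investigators reported.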
Common adverse events were similar in the two randomized treatment groups, with anemia, fatigue, nausea, and headache the most frequently reported. The rates of serious adverse events were 16% among those in the RBV dose-reduction arm and 13% among those on EPO. There were more thromboembolic events among the patients treated with EPO. The single death, in a patient in the RBV arm, was a sudden cardiac death 3 weeks after stopping treatment.
The algorithm proposed by the authors, which has different hemoglobin monitoring recommendations for those with and without advanced fibrosis and cirrhosis, recommends that the primary intervention for managing anemia should be to reduce the RBV dosage. But if hemoglobin levels remain below 10 g/dL, "secondary interventions, such as administration of EPO, red cell transfusions, and reducing the dosage of peginterferon can be considered," the authors wrote. In addition, "it is important that the patient receives at least 50% of the total milligrams of RBV calculated from the initial RBV dosage (mg/d) and the assigned duration" defined by the response-guided therapy algorithm, they added.
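The sequence the authors describe can be summarized as a simple decision rule. The sketch below is a hypothetical rendering of that sequence, not the published algorithm itself: the hemoglobin thresholds and the 50% cumulative-RBV target come from the article, but the function names and structure are our own, and the published version also varies hemoglobin monitoring by fibrosis/cirrhosis status.

```python
# Hypothetical sketch of the anemia-management sequence described in the
# article; NOT the published algorithm (which also differentiates
# hemoglobin monitoring by fibrosis/cirrhosis status).

def next_step(hemoglobin_g_dl: float, rbv_already_reduced: bool) -> str:
    """Suggest the next step for anemia during triple therapy."""
    if hemoglobin_g_dl <= 7.5:
        return "discontinue treatment"        # stopping rule used in the study
    if hemoglobin_g_dl <= 10.0:
        if not rbv_already_reduced:
            return "reduce RBV dosage"        # primary intervention
        return "consider EPO, transfusion, or peginterferon dose reduction"
    return "continue current regimen"

def meets_rbv_target(total_mg_received: float,
                     initial_mg_per_day: float,
                     assigned_days: int) -> bool:
    """Cumulative-exposure target: at least 50% of the planned total RBV."""
    return total_mg_received >= 0.5 * initial_mg_per_day * assigned_days
```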
The open-label design was one of the study’s limitations, and whether these results apply to other HCV treatment regimens is unclear, the authors noted. However, the results "would most likely be applicable to all RBV- and peginterferon/RBV-based regimens" for hepatitis C, they added.
The study was funded by Schering-Plough, the manufacturer of PegIntron and combination packs of PegIntron with ribavirin, which is now part of Merck. The investigator disclosures included having served as consultants and speakers, and/or having received grants from multiple pharmaceutical companies; the investigators include several current and former employees of Merck Sharp & Dohme Corp. (a subsidiary of Merck & Co.), the manufacturer of Victrelis.
FROM GASTROENTEROLOGY
Major finding: In a study of 500 patients with chronic hepatitis C who developed anemia while on triple therapy, sustained virologic response rates were similar among those managed with a reduced ribavirin dose (71.5%) and those treated with erythropoietin (70.9%).
Data source: A randomized, open-label, multicenter study comparing the effects of ribavirin dose reduction with those of erythropoietin treatment on sustained virologic response rates in 500 patients with chronic hepatitis C who became anemic during treatment with peginterferon, ribavirin, and boceprevir.
Disclosures: The study was funded by Schering-Plough, which is now part of Merck. The investigator disclosures included having served as consultants and speakers, and/or having received grants from multiple pharmaceutical companies; the investigators include several current and former employees of Merck Sharp & Dohme Corp. (a subsidiary of Merck & Co.).
Hybrid colorectal cancer screening model reduced cancer rate
A hybrid colorectal cancer screening strategy that incorporates annual fecal immunochemical testing beginning at age 50 years and a single colonoscopy at age 66 years proved both clinically effective and cost-effective in a simulation model.
Using the Archimedes Model – a "large-scale integrated simulation of human physiology, diseases, and health care systems" – Tuan Dinh, Ph.D., of Archimedes Inc., San Francisco, and colleagues found that compared with no screening, the hybrid strategy with annual fecal immunochemical testing (FIT) reduced colorectal cancer incidence by 73%, gained 11,200 quality-adjusted life years (QALYs), and saved $126.8 million for every 100,000 people screened during a 30-year period.
Source: American Gastroenterological Association
Without screening, a cohort of 100,000 members of Kaiser Permanente Northern California who were included in the virtual study experienced 6,004 colorectal cancers and 1,837 colorectal cancer deaths. All methods of screening that were evaluated in the model – including annual FIT, colonoscopy at 10-year intervals, sigmoidoscopy at 5-year intervals, both FIT and sigmoidoscopy, and both FIT and colonoscopy – substantially reduced the colorectal cancer incidence, by 53%-76%, and added a significant number of QALYs, compared with no screening, the investigators reported online March 28 in Clinical Gastroenterology and Hepatology.
Colonoscopy as a single-modality screening strategy was the most effective for colorectal cancer reduction (76%), and FIT alone was the least costly approach (with savings of $142.6 million per 100,000 persons, compared with no screening), but FIT plus colonoscopy came close: The hybrid strategy reduced colorectal cancer by 73% and, compared with FIT alone, gained 1,400 QALYs/100,000 at an incremental cost of $9,700 per QALY gained. Colonoscopy gained 500 QALYs/100,000 more than the hybrid strategy, at an incremental cost of $35,100 per QALY gained.
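The incremental cost-effectiveness figures above are simple ratios of extra cost to extra benefit. A minimal sketch of that arithmetic (the QALY and per-QALY numbers are from the article; the function name and the back-calculated dollar figure are ours):

```python
# Incremental cost-effectiveness ratio (ICER): extra cost divided by the
# extra QALYs gained when moving from one strategy to another.
def icer(extra_cost_usd: float, extra_qalys: float) -> float:
    return extra_cost_usd / extra_qalys

# Hybrid vs. FIT alone: 1,400 extra QALYs per 100,000 people at $9,700 per
# QALY implies roughly $13.6 million in incremental cost per 100,000 screened.
implied_extra_cost = 9_700 * 1_400
print(f"Implied incremental cost: ${implied_extra_cost:,}")  # $13,580,000
```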
Furthermore, the hybrid strategy required 55% fewer FITs and 41% more colonoscopies than FIT alone, and required 2.1-2.3 fewer colonoscopies per person during 30 years than screening by colonoscopy alone, they reported (Clin. Gastroenterol. Hepatol. 2013 March 28 [doi: 10.1016/j.cgh.2013.03.013]).
On sensitivity analysis, a hybrid approach using biennial FIT was also cost-effective, compared with either FIT or colonoscopy alone.
The core of the Archimedes Model is "a set of equations that represent physiological pathways at the clinical level (i.e., at the level of detail of basic medical tests, clinical trials, and patient charts)." The colorectal cancer submodel, which was derived from public databases, published epidemiologic studies, and clinical trials, was developed in collaboration with the American Cancer Society, the authors explained.
The simulated population included a cross section of 2008 Kaiser Permanente members who were aged 50-75 years at the start of the virtual trial comparing the screening strategies.
The findings are important, given that colorectal cancer is the second-leading cause of cancer deaths among adults in the United States, and although colonoscopy is the recommended approach for primary screening in most U.S. guidelines, it is the most invasive, risky, and costly screening modality, the investigators noted.
Conversely, stool tests with follow-up colonoscopy for positive results are the least expensive. In the past, stool test strategies have been hampered by low sensitivity for adenomas and low specificity, but recent improvements in the sensitivity and specificity of FIT have renewed interest in the use of stool tests, they said.
Though the hybrid FIT/colonoscopy strategy is limited by several factors – for example, the accuracy of any simulation model is dependent on assumptions about test performance and adherence, which may vary – the findings of this study suggest the hybrid strategy could improve outcomes while lowering costs.
"The simulation results indicated that [the hybrid strategy] required 37% fewer colonoscopies than [colonoscopy alone], while delivering only slightly inferior health benefits," the investigators said. These results demonstrate that "it is possible to design hybrid colorectal cancer screening strategies that can deliver health benefits and cost-effectiveness that are comparable to those of single-modality strategies, with a favorable impact on resource demand," they noted.
Future clinical studies should address whether hybrid strategies have the additional advantage of increasing screening adherence, they concluded.
This study was carried out by Archimedes under a contract with The Permanente Medical Group (TPMG). One author, Dr. Theodore R. Levin, is a TPMG shareholder, and another, Cindy Caldwell, is a TPMG employee. The authors reported having no other conflicts of interest.
Screening for colorectal cancer (CRC) is currently based on strategies employing single tests, with the exception of the sigmoidoscopy/fecal occult blood test combination. In the United States, colonoscopy has emerged as the dominant CRC screening modality, given its effectiveness for CRC prevention. Its drawbacks include an increased risk for complications, especially in older patients, and higher cost. Fecal immunochemical testing (FIT) outperforms the older-generation guaiac-based stool tests and has emerged as the prime noninvasive CRC screening option.
Ongoing randomized controlled trials are focused on head-to-head comparisons of colonoscopy versus FIT (or usual care); however, colonoscopy and FIT have complementary strengths and limitations, which make hybrid screening approaches logical and attractive from clinical and economic standpoints. For example, in the Spanish ColonPrev study, subjects randomized to the FIT group were more likely to participate in screening; however, subjects in the colonoscopy group had more adenomas detected.
A hybrid strategy could capitalize on colonoscopy's higher effectiveness and FIT's lower cost and better adherence, while attenuating the drawbacks of colonoscopy's invasiveness and FIT's lower sensitivity for adenoma detection.
In the present simulation model, a hybrid strategy based on annual or biennial FIT starting at age 50, followed by a single colonoscopy at age 66, resulted in decreased CRC incidence and mortality, gain in quality-adjusted life-years, and reduction in cost comparable to those of single-test strategies.
The study findings, as with any simulation exercise, depend largely on the baseline assumptions, notably those regarding test sensitivity and patient adherence. However, the study by Dinh et al. is an important first step toward determining the viability of hybrid screening approaches, and it paves the way for future clinical studies.
Dr. Charles Kahi is associate professor of clinical medicine at the Indiana University School of Medicine, and gastroenterology section chief at Roudebush VA Medical Center, both in Indianapolis. He had no relevant financial disclosures.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: A hybrid screening strategy reduced colorectal cancer by 73%.
Data source: A cost-effectiveness analysis using a simulation model in 100,000 subjects.
Disclosures: This study was carried out by Archimedes under a contract with The Permanente Medical Group (TPMG). One author, Dr. Theodore R. Levin, is a TPMG shareholder, and another, Cindy Caldwell, is a TPMG employee. The authors reported having no other conflicts of interest.
Carbonation affects brain processing of sweet stimuli
Carbonation produces a decrease in the neural processing of sweetness-related signals, particularly those from sucrose, a small functional neuroimaging study shows.
The findings, which suggest that the combination of CO2 and sucrose might increase consumption of sucrose, could have implications for dietary interventions designed to regulate caloric intake, according to Dr. Francesco Di Salle of Salerno (Italy) University and his colleagues.
To assess the interference between CO2 and the perception of sweetness, as well as the differential effects of CO2 on sucrose and aspartame-acesulfame (As-Ac, an artificial sweetener combination commonly used in diet beverages), the investigators performed two functional magnetic resonance imaging (fMRI) experiments to evaluate changes in regional brain activity.
The first experiment, performed in nine volunteers, analyzed the effect of carbonation in four sweet Sprite-based solutions, including one carbonated and sweetened with sucrose, one noncarbonated and sweetened with sucrose, one carbonated and sweetened with As-Ac, and one noncarbonated and sweetened with As-Ac. The second experiment evaluated the spatial location of the strongest neural effects of sour taste and CO2 within the insular cortex of eight subjects.
On fMRI, the presence of carbonation in sweet solutions "independently of the sweetening agent, reduced neural activity in the anterior insula (AI), orbitofrontal cortex (OFC), and posterior pons ... the effect of carbonation on sucrose was much higher than on perception of As-Ac," they noted, explaining that "at the perceptual level ... carbonation reduced the perception of sweetness and the differences between the sensory profiles of sucrose and As-Ac."
This effect may increase sucrose intake, but it also favors diet beverage formulations being perceived as similar to regular formulations, the investigators reported online May 28 ahead of print in Gastroenterology.
"It is also coherent with a process of prioritization among perceptual inputs (chemesthetic and gustatory information) deriving from the same body topography and converging to the same cortical regions (AI, OFC)," they said (Gastroenterology 2013 [doi:10.1053/j.gastro.2013.05.041]).
To correlate neuroimaging with behavioral data, the ability of carbonation to modulate perception of sweetness was assessed in 14 subjects, who scored the level of perceived sweetness of the solutions on a visual analog scale ranging from 0 to 100 mm. The effect of 1,585 ppm of CO2 added to a 10% glucose solution on the perception of sweetness was also tested in seven subjects.
CO2 was able to significantly reduce sweet-induced taste perceptions as assessed by the volunteers’ visual analog scale recordings: The perception of Sprite-associated sweetness was significantly reduced by CO2 (48 vs. 63 and 48 vs. 55 for As-Ac and sucrose, respectively).
"Similarly, in the presence of carbonation, sweet-induced perception of a 10% glucose solution was significantly reduced (36 vs. 53)," the investigators said.
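For context, the visual analog scores above correspond to the following relative reductions in perceived sweetness; this is an illustrative calculation of ours, and the percentages do not appear in the paper:

```python
# Relative reduction in perceived sweetness when CO2 is present, computed
# from the visual analog scale scores reported above (with CO2 vs. without).
def relative_reduction(with_co2: float, without_co2: float) -> float:
    return (without_co2 - with_co2) / without_co2

print(f"As-Ac:   {relative_reduction(48, 63):.0%}")   # ~24%
print(f"Sucrose: {relative_reduction(48, 55):.0%}")   # ~13%
print(f"Glucose: {relative_reduction(36, 53):.0%}")   # ~32%
```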
Given the widespread use of CO2 in sweet beverages, the modulation of sweet perception by CO2 is of interest, they noted.
The findings suggest that CO2 modulates the perception of sweetness, thereby reducing the global neural processing of sweetness (with the processing of sucrose reduced more than that of As-Ac) and the processing difference between the two sweetening agents. This, they said, is "of utmost importance for designing carbonated beverages and is relevant to the regulation of caloric intake."
"This effect is driven by the integration of information on gastric fullness and on nutrient depletion, conveyed to a brain network where the autonomic brainstem circuitry and tractus solitarius neurons play a critical role in homeostatic functions," they added.
It may be that taste and CO2-related information influence food choices and intake through integration in the tractus solitarius with input from the gastrointestinal tract, they suggested, explaining that "the reduced discrimination between sucrose and As-Ac induced by CO2 would promote the consumptions of low-calorie beverages and would converge with CO2-induced gastric distention in limiting caloric intake."
This study was supported in part by the Coca-Cola Company. One author, Dr. Rosario Cuomo, was sponsored by the Coca-Cola Company. The remaining authors reported having no disclosures.
Carbonation produces a decrease in the neural processing of sweetness-related signals, particularly those from sucrose, a small functional neuroimaging study shows.
The findings, which suggest that the combination of CO2 and sucrose might increase consumption of sucrose, could have implications for dietary interventions designed to regulate caloric intake, according to Dr. Francesco Di Salle of Salerno (Italy) University and his colleagues.
To assess the interference between CO2 and perception of sweetness, as well as the differential effects of CO2 on sucrose and aspartame-acesulfame (As-Ac, an artificial sweetener combination commonly used in diet beverages), the investigators performed two functional magnetic resonance imaging (fMRI) experiments to evaluate changes in regional brain activity.
The first experiment, performed in nine volunteers, analyzed the effect of carbonation in four sweet Sprite-based solutions, including one carbonated and sweetened with sucrose, one noncarbonated and sweetened with sucrose, one carbonated and sweetened with As-Ac, and one noncarbonated and sweetened with As-Ac. The second experiment evaluated the spatial location of the strongest neural effects of sour taste and CO2 within the insular cortex of eight subjects.
On fMRI, the presence of carbonation in sweet solutions "independently of the sweetening agent, reduced neural activity in the anterior insula (AI), orbitofrontal cortex (OFC), and posterior pons ... the effect of carbonation on sucrose was much higher than on perception of As-Ac," they noted, explaining that "at the perceptual level ... carbonation reduced the perception of sweetness and the differences between the sensory profiles of sucrose and As-Ac."
This effect may increase sucrose intake, but it also favors diet beverage formulations being perceived as similar to regular beverage formulations, the investigators reported online May 28 ahead of print in Gastroenterology.
"It is also coherent with a process of prioritization among perceptual inputs (chemesthetic and gustatory information) deriving from the same body topography and converging to the same cortical regions (AI, OFC), they said (Gastroenterology 2013 [doi:10.1053/j.gastro.2013.05.041]).
To correlate neuroimaging with behavioral data, the ability of carbonation to modulate perception of sweetness was assessed in 14 subjects, who scored the level of perceived sweetness of the solutions on a visual analog scale ranging from 0 to 100 mm. The effect of 1,585 ppm of CO2 added to a 10% glucose solution on the perception of sweetness was also tested in seven subjects.
CO2 was able to significantly reduce sweet-induced taste perceptions as assessed by the volunteers’ visual analog scale recordings: The perception of Sprite-associated sweetness was significantly reduced by CO2 (48 vs. 63 and 48 vs. 55 for As-Ac and sucrose, respectively).
"Similarly, in the presence of carbonation, sweet-induced perception of a 10% glucose solution was significantly reduced (36 vs. 53), the investigators said.
Given the widespread use of CO2 in sweet beverages, the modulation of sweet perception by CO2 is of interest, they noted.
The findings suggest that CO2 modulates the perception of sweetness, thereby reducing the global neural processing of sweetness (that of sucrose more than of As-Ac) and the processing difference between the sweetening agents. This is "of utmost importance for designing carbonated beverages and is relevant to the regulation of caloric intake," they said.
"This effect is driven by the integration of information on gastric fullness and on nutrient depletion, conveyed to a brain network where the autonomic brainstem circuitry and tractus solitarius neurons play a critical role in homeostatic functions," they added.
It may be that taste and CO2-related information influence food choices and intake through integration in the tractus solitarius with input from the gastrointestinal tract, they suggested, explaining that "the reduced discrimination between sucrose and As-Ac induced by CO2 would promote the consumption of low-calorie beverages and would converge with CO2-induced gastric distention in limiting caloric intake."
This study was supported in part by the Coca-Cola Company. One author, Dr. Rosario Cuomo, was sponsored by the Coca-Cola Company. The remaining authors reported having no disclosures.
FROM GASTROENTEROLOGY
Major finding: The presence of CO2 produces an overall decrease in neural processing of sweetness-related signals.
Data source: A small brain neuroimaging study.
Disclosures: This study was supported in part by the Coca-Cola Company. One author, Dr. Rosario Cuomo, was sponsored by the Coca-Cola Company. The remaining authors reported having no disclosures.
Endoscopy, surgery for pancreatic pseudocysts show equal efficacy
Endoscopic cystogastrostomy was as effective as surgical cystogastrostomy for pancreatic pseudocyst drainage in a randomized trial comparing the two approaches.
None of the 20 patients randomized to undergo endoscopic treatment, and 1 of 20 patients randomized to undergo surgery, experienced pseudocyst recurrence within 24 months of follow-up, Dr. Shyam Varadarajulu of the University of Alabama at Birmingham and his colleagues reported online May 31 ahead of print in Gastroenterology.
Source: American Gastroenterological Association
Moreover, those in the endoscopy group had a shorter hospital length of stay than did the patients in the surgery group (median of 2 vs. 6 days) and a lower mean cost of care ($7,011 vs. $15,052), the investigators reported (Gastroenterology 2013 May 31 [doi: 10.1053/j.gastro.2013.05.046]).
Patients included in the study were adults with intrapancreatic or extrapancreatic pseudocysts who were enrolled between Jan. 20 and Dec. 28, 2009, following evaluation by a gastroenterologist or surgeon in an outpatient clinic or inpatient setting.
The 20 patients in the endoscopy group underwent cystogastrostomy using endoscopic ultrasound guidance and fluoroscopy while they were under conscious sedation.
"Once the pseudocyst was identified, it was accessed using a 19-gauge needle, and the gastric wall was dilated up to 15 mm using a wire-guided balloon. Two plastic stents then were deployed to facilitate the drainage of pseudocyst contents into the stomach," the investigators explained, noting that endoscopy patients were discharged following the procedure.
No procedural complications occurred in any of the 20 patients. However, one patient presented to the hospital 13 days later with persistent abdominal pain; a computed tomography scan showed a residual 7-cm pseudocyst, which was successfully treated by deployment of additional stents. At 8-week follow-up, abdominal CT scans showed pseudocyst resolution in all 20 patients.
Endoscopic retrograde cholangiopancreatography (ERCP), which was performed in all of the endoscopy patients to assess and treat any pancreatic duct leaks, was successful in 18 of the 20 patients. Magnetic resonance cholangiopancreatography (MRCP), performed in the two patients in whom ERCP was unsuccessful, showed a normal pancreatic duct in one and a disconnected duct in the other, the investigators said.
The 20 patients in the surgery group were all treated by the same pancreatic surgeon, who used an endovascular stapler to create at least a 6-cm cystogastrostomy after obtaining entry to the pseudocyst.
"A nasogastric tube then was left in the stomach and passed into the pseudocyst cavity to allow for intermittent irrigation until postoperative day 1 ... the nasogastric tube was removed on postoperative day 1 and clear liquids were started on day 2," they said.
Patients were discharged once a soft diet was tolerated and pain adequately controlled.
One patient with ongoing alcohol consumption developed pseudocyst recurrence at 4 months and was managed by endoscopic cystogastrostomy.
Two surgery patients experienced complications, including a wound infection treated by local debridement and antibiotics in one patient, and a case of hematemesis in one patient who was on anticoagulation and who was readmitted 9 days after discharge. "At endoscopy, a visible clot was noted at the site of surgical anastomosis, and hemostasis was achieved by application of electrocautery," the investigators said.
Two other patients were not able to tolerate oral intake postoperatively; one of them was managed conservatively, and one required surgical placement of a temporary enteral feeding tube. In addition, one patient presented at 6 months with abdominal pain and was found on ERCP to have a stricture in the pancreatic tail that required management by distal pancreatectomy.
Overall, there were no differences in the rates of treatment success, treatment failure, complications, or reinterventions between the endoscopy and surgery groups.
However, in addition to the shorter hospital stay and lower costs in the endoscopy group, patients in that group had significantly greater improvement over time in physical and mental health component scores on the Medical Outcomes Study 36-Item Short-Form General Survey. Although the scores improved in both cohorts, the physical and mental component scores were 4.48 and 4.41 points lower, respectively, in the surgery group than in the endoscopy group, the investigators said.
The findings are of note because although endoscopic drainage of pancreatic pseudocysts is increasingly performed, surgical cystogastrostomy is still considered the gold standard for treatment, as randomized trials comparing the two approaches had not previously been performed.
"The clinical relevance of this study is substantial because it shows that endoscopically managed patients can be discharged home earlier with a better health-related quality of life, and treatment can be delivered at a lower cost," the investigators said.
The authors reported having no disclosures.
There has been marked evolution in the understanding and management of acute and chronic pancreatitis over the last decade. Walled-off necroses and pseudocysts are consequences of pancreatitis that may be intrapancreatic, extrapancreatic, or both. These two entities are often confused. Fortunately, a recent international consensus has clarified that pseudocysts are liquid-filled, are almost always extrapancreatic, and rarely occur as the consequence of severe pancreatitis or involve "disconnected duct" (Gut 2013;62:102-11).
Dr. Varadarajulu and his colleagues are to be congratulated for performing a landmark study comparing surgery and endoscopy for internal drainage of pseudocysts (Gastroenterology 2013 May 31 [doi: 10.1053/j.gastro.2013.05.046]). They covered all the bases for an outstanding efficacy trial, including performance by experts at a tertiary center and careful definitions of endpoints. Although the title of the paper is "Equal efficacy … [of the two approaches]," based on the primary endpoint of recurrence at 24 months, they addressed cost, hospital stay, and quality-of-life measures, all increasingly important in the current health care environment. In the latter regard, endoscopic ultrasound-guided cystogastrostomy emerged as clearly superior to open surgery. If patients with more comorbidity, such as portal hypertension, had been included, the differences would likely have been even more striking.
Thus, for pseudocysts, as for walled-off necroses, the picture is becoming increasingly clear: Minimally invasive and in particular endoscopic techniques are superior to open surgical approaches. This represents a paradigm shift in clinical practice. However, for these techniques to be effective and safe in widespread practice, endoscopists attempting to manage these conditions must have highly specialized expertise in pancreatic diseases and techniques, and must manage these complex patients in close collaboration with their colleagues in surgery and interventional radiology.
Dr. Martin L. Freeman, FACG, FASGE, is professor of medicine at the University of Minnesota, Minneapolis. He disclosed receiving speaking honoraria from Boston Scientific and Cook, and consulting for Boston Scientific.
FROM GASTROENTEROLOGY
Major finding: Pseudocysts recurred in 0 of 20 endoscopy patients, and 1 of 20 surgery patients.
Data source: An open-label randomized trial involving 40 patients.
Disclosures: The authors reported having no disclosures.
Probiotic Saccharomyces boulardii doesn't stop Crohn's relapse
Treatment with the nonpathogenic probiotic yeast Saccharomyces boulardii does not prevent relapse in Crohn’s disease.
The finding – which nevertheless does "not allow the exclusion of the potential therapeutic efficacy of probiotics" – comes from the third study to assess S. boulardii in Crohn’s, wrote Dr. Arnaud Bourreille and colleagues. The results are in the August issue of Clinical Gastroenterology and Hepatology.
In a yearlong, randomized, double-blind, placebo-controlled study, Dr. Bourreille, of the CHU de Nantes, France, and colleagues looked at 159 adult patients enrolled at 32 centers during the acute phase of Crohn’s disease.
Patients were treated for 4 weeks with corticosteroids, budesonide, and/or aminosalicylates, according to the preference of each investigator, until remission. They were then randomized to receive either oral S. boulardii at 1 g daily or a placebo until the end of the study at week 52 or earlier, in the case of relapse, with follow-up conducted every 12 weeks.
Relapse was defined as registering a Crohn’s disease activity index (CDAI) higher than 220 points on follow-up; registering a CDAI between 150 and 220 with an increase of at least 70 points over the baseline value; or requiring a surgical procedure or medical treatment specifically for CD.
The authors found that by 1 year, 80 patients had experienced relapse of Crohn’s: 38 in the S. boulardii group (47.5%) and 42 in the placebo group (52.5%), a nonsignificant difference (P greater than .05).
The authors also found that the median time to relapse was not statistically different between groups: 40.7 weeks in the treatment group (range, 2.6-56.0 weeks) and 39.0 weeks in the placebo group (range, 0.1-55.0 weeks; P = .78).
Indeed, the only finding that did reach statistical significance was an interaction between treatment group and smoking status: nonsmokers given placebo had more relapses (72.0%) than did nonsmokers treated with S. boulardii (34.5%; P = .016 for the difference between cohorts).
"However, in smokers and former smokers, the proportion of relapse was not significantly different," added the authors.
Looking at safety, Dr. Bourreille reported that just over half of patients in each group complained of adverse events, including diarrhea, arthralgia, constipation, and abdominal pain.
"One oral fungal infection occurred in one patient treated with S. boulardii," he added. "None of the drug-related AEs was serious."
Compliance was greater than 90% in both the treatment and placebo groups.
Dr. Bourreille conceded that the study was limited by the use of clinical relapse as an endpoint, versus relapse defined by endoscopic findings.
"Clinical recurrence was chosen as the primary endpoint because endoscopic evaluation was not deemed necessary for the surveillance of nonsevere CD," wrote the investigators. "Moreover, at the time the study was designed, the concept of mucosal healing was not considered to be as relevant as it now is."
They added, however, that biological parameters of inflammation "were measured at each visit to ensure that clinical recurrence was associated with objective inflammation."
Dr. Bourreille and several coauthors disclosed ties with multiple pharmaceutical companies, including Biocodex, the maker of S. boulardii. Two coauthors were Biocodex employees.
This article by Bourreille and colleagues provides some incremental insights into both probiotic therapy for Crohn’s disease and the prevention of postoperative recurrence.
With regard to probiotic therapy, to date, there has been one small trial of Escherichia coli Nissle 1917 for active Crohn’s disease that was negative, one small trial of Saccharomyces boulardii for maintenance of medically induced remission that showed a benefit, one small trial of Lactobacillus rhamnosus GG for medically induced remission that was negative, and two small trials of Lactobacillus johnsonii for prevention of postoperative recurrence that were negative.
The current trial of S. boulardii for prevention of postoperative recurrence is also negative. Taken in total, the existing randomized controlled trial data do not support treatment of Crohn’s disease with probiotic therapy using E. coli Nissle 1917, S. boulardii, L. rhamnosus GG, or L. johnsonii.
As for prevention of postoperative recurrence, randomized controlled trials have demonstrated minimal efficacy for mesalamine, no efficacy for ciprofloxacin, efficacy for the imidazole antibiotics metronidazole and ornidazole (but also treatment-limiting toxicity), modest efficacy for azathioprine and 6-mercaptopurine, and marked efficacy for anti–tumor necrosis factor antibody therapy (but these data are limited by small sample size).
The current trial of probiotic with S. boulardii as well as previous trials with L. johnsonii did not demonstrate efficacy for prevention of postoperative recurrence, thus indicating that at present there is not a role for probiotics for this treatment indication. This study highlights the investigational nature of probiotic therapy for Crohn’s disease, and the need to investigate alternative treatment strategies to alter the microbial flora, such as fecal microbiota transplantation and personalized medicine probiotic cocktails that replace specific missing microbial flora in individual patients.
Dr. William J. Sandborn is professor of medicine and adjunct professor of surgery, chief of the division of gastroenterology, and director of the UCSD IBD Center, University of California San Diego and UC San Diego Health System. He had no relevant conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: The probiotic yeast Saccharomyces boulardii does not appear to have any beneficial effects for patients with Crohn’s disease who are in remission.
Data source: FLORABEST, a 52-week, randomized, double-blind, placebo-controlled, 32-center trial of S. boulardii in Crohn’s disease.
Disclosures: Dr. Bourreille and several coauthors disclosed ties with multiple pharmaceutical companies, including Biocodex, the maker of S. boulardii. Two coauthors were Biocodex employees.
New estimates show lack of progress in liver mortality
Deaths from liver-related causes remained static between 1979 and 2008, according to updated estimates from the Rochester Epidemiology Project and the National Death Index.
The finding represents a dramatic correction to previous estimates from the National Center for Health Statistics (NCHS) at the Centers for Disease Control and Prevention, which used a very narrow definition of liver-related mortality, Dr. Sumeet K. Asrani and colleagues reported. The results were published in the August issue of Gastroenterology.
According to Dr. Asrani of the Mayo Clinic, Rochester, Minn., "current estimates [from the NCHS] are solely based on one diagnostic category, namely chronic liver disease and cirrhosis, which fails to capture deaths attributed to other uniquely liver-related descriptors, such as hepatic encephalopathy or hepatorenal syndrome."
Deaths due to viral hepatitis or liver cancer are also not included.
"For example, the current estimates of liver-related deaths might not include the demise of a person with hepatitis C cirrhosis who died of hepatorenal syndrome."
In their updated assessment, the researchers looked at data from the Rochester Epidemiology Project, in which death records of Olmsted County residents are tracked from multiple sources, including county and state vital records, as well as individual medical charts.
The researchers found that when the CDC definition was used, which includes causes attributable to a single ICD-9 code of 571 and ICD-10 codes of K70, K73, and K74, there were 71 liver-related deaths among Olmsted County residents between 1999 and 2008.
In contrast, when the updated definition was applied, which included additional diagnoses specific to liver disease yet excluded in the CDC definition, as well as viral hepatitis and malignant neoplasm of the liver and intrahepatic bile ducts, there were 261 liver-related deaths in the county between 1999 and 2008.
That included 85 deaths (32.6%) from viral hepatitis and 70 deaths (26.8%) from hepatobiliary malignancies.
The researchers then looked at national mortality rates.
According to the restrictive CDC estimates, in 2008, there were 29,951 deaths due to liver disease in the United States, for a death rate of 11.7 per 100,000 persons (16.2 for men and 7.6 for women).
Using the updated definition, however, Dr. Asrani tallied 66,007 deaths, including 10,256 from the expanded liver disease diagnosis, 7,625 from viral hepatitis, and 18,175 from hepatobiliary malignancies, for a death rate of 25.7 per 100,000 persons (35.7 for men and 16.8 for women).
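As a rough consistency check on these figures (a sketch only: the denominator below is back-calculated from the reported CDC rate, and the published rates may be age-adjusted), a crude death rate is simply deaths divided by population, scaled per 100,000:

```python
def rate_per_100k(deaths: int, population: float) -> float:
    """Crude death rate per 100,000 persons."""
    return deaths / population * 100_000

# Assumed denominator: back-calculated from the CDC figures above
# (29,951 deaths at 11.7 per 100,000 implies roughly 256 million persons).
population = 29_951 / 11.7 * 100_000

print(round(rate_per_100k(29_951, population), 1))  # 11.7 (CDC definition)
print(round(rate_per_100k(66_007, population), 1))  # ~25.8, near the reported 25.7
```

Computed this way, the updated-definition rate comes out near 25.8 rather than exactly 25.7, consistent with rounding or adjustment in the published figures.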
Finally, the investigators compared longitudinal trends in death rates.
They found that when the CDC definition was used, deaths due to liver disease (per 100,000 persons) decreased from 16.5 in the era encompassing the years 1979 to 1988 to 11.7 in the years between 1999 and 2008, a reduction of 38%.
"However, when the updated definition was applied, the downward trend disappeared," they wrote.
Indeed, liver-related mortality per 100,000 persons was basically unchanged over 30 years: 23.9 for the era between 1979 and 1988 and 24.4 for the period from 1999 to 2008.
"This discrepancy was accounted for by deaths due to viral hepatitis, which increased from 0.5 to 2.5 per 100,000 and hepatobiliary cancer, which increased from 3.1 to 6.4 per 100,000," wrote the authors.
"These data support that deaths due to viral hepatitis and hepatobiliary cancers should be included in the enumeration of liver-related deaths to accurately represent the burden of chronic liver disease."
"Underappreciation of the prevalence and natural history of liver disease can lead to suboptimal care," they concluded.
The authors disclosed no conflicts of interest related to this study, which was partially supported by grants from the National Institutes of Health.
FROM GASTROENTEROLOGY
Major finding: An updated definition of liver-related mortality shows that death rates are unchanged over 30 years: 23.9 per 100,000 persons for the era between 1979 and 1988 and 24.4 for the period from 1999 to 2008.
Data source: Records from the Rochester Epidemiology Project and the National Death Index.
Disclosures: The authors disclosed no conflicts of interest related to this study, which was partially supported by grants from the National Institutes of Health.
POEM is safe, effective in achalasia
Peroral endoscopic myotomy is a safe and effective therapy for achalasia, with 82% of patients in symptom remission at 12 months post treatment.
"With [peroral endoscopic myotomy] it seems possible to emulate the surgical principles of laparoscopic Heller myotomy without the need for skin incisions and to reduce the procedural trauma," reported Dr. Daniel Von Renteln and colleagues. The findings are in the August issue of Gastroenterology.
According to Dr. Von Renteln of the University Hospital Hamburg-Eppendorf in Hamburg, Germany, peroral endoscopic myotomy (POEM) is a novel alternative achalasia treatment.
As described previously (Endoscopy 2010;42:265-71), under general anesthesia and following endoscopy to visualize the gastroesophageal junction, a mucosal incision is made to create entry to the submucosal space. A submucosal tunnel is then created, extending downward, allowing myotomy of the esophageal sphincter. The mucosal entry site is then closed with hemostatic clips.
In the present study, the researchers looked at 70 patients who underwent the procedure at five centers in Europe and North America.
The mean procedure time for POEM was 105 minutes and the mean length of myotomy was 13 cm. Patients experienced a small but significant drop in hemoglobin post procedure (from 13 to 12 g/dL, P less than .001) as well as small but significant increases in leukocyte count and C-reactive protein levels.
At 3 months post procedure, treatment success was achieved in 97% of cases, with mean Eckardt scores decreasing from 7 pre procedure to 1 post procedure (P less than .001).
Of the 61 patients who underwent manometry at 3 months, the researchers found that the mean pretreatment and posttreatment lower esophageal sphincter pressures were 28 mm Hg versus 9 mm Hg, respectively (P less than .001).
Results at 6 months and 12 months were comparable, with treatment success of 88.5% and 82.4%, respectively, and mean Eckardt scores of 1.3 and 1.7, respectively (P less than .001 for both).
Patients who failed treatment subsequently underwent laparoscopic Heller myotomy (n = 3) or balloon dilatation (n = 5), with safe and effective outcomes, reported the authors.
"Because the target area for the myotomy during POEM is lateral (on the lesser curvature side) and the myotomy during LHM is anterior, subsequent LHM seems to be a feasible second-line treatment if POEM fails."
Moreover, roughly half of the patients in the current study had previously undergone endoscopic balloon dilatation or botulinum toxin injection before POEM, the researchers wrote. "This shows that POEM is safe and efficient after previous treatments."
Nevertheless, the procedure is not without risk. "Visible complete transmural openings into the mediastinum and into the peritoneal cavity occurred in the majority of patients," they pointed out. "Therefore, POEM potentially carries the risk of mediastinitis/peritonitis and/or damage to surrounding organs."
Clip dislocation at mucosal closure (n = 3), mucosal injury through electrocautery or laceration (n = 3), and bleeding requiring intervention also occurred (n = 1).
Finally, looking at postprocedure reflux rates, at 12 months, roughly 37% of patients complained of gastroesophageal reflux, with just under 8% of these patients reporting reflux symptoms daily.
Overall, 29% were prescribed a proton pump inhibitor; 19.6% of these used a PPI daily.
The authors disclosed no conflicts of interest. The study was supported by EURO-NOTES Foundation – a partnership between the European Association for Endoscopic Surgery and the European Society of Gastrointestinal Endoscopy – and Olympus, maker of endotherapeutic supplies.
Peroral endoscopic myotomy is a safe and effective therapy for achalasia, with 82% of patients in symptom remission at 12 months post treatment.
"With [peroral endoscopic myotomy] it seems possible to emulate the surgical principles of laparoscopic Heller myotomy without the need for skin incisions and to reduce the procedural trauma," reported Dr. Daniel Von Renteln and colleagues. The findings are in the August issue of Gastroenterology.
According to Dr. Von Renteln of the University Hospital Hamburg-Eppendorf in Hamburg, Germany, peroral endoscopic myotomy (POEM) is a novel alternative achalasia treatment.
As described previously (Endoscopy 2010;42:265-71), under general anesthesia and following endoscopy to visualize the gastroesophageal junction, a mucosal incision is made to create entry to the submucosal space. A submucosal tunnel is then created, extending downward, allowing myotomy of the esophageal sphincter. The mucosal entry site is then closed with hemostatic clips.
In the present study, the researchers looked at 70 patients who underwent the procedure at five centers in Europe and North America.
The mean procedure time for POEM was 105 minutes and the mean length of myotomy was 13 cm. Patients experienced a small but significant drop in hemoglobin post procedure (from 13 to 12 g/dL, P less than .001) as well as small but significant increases in leukocyte count and C-reactive protein levels.
At 3 months post procedure, treatment success was achieved in 97% of cases, with mean Eckhardt scores decreasing from 7 pre procedure to 1 post procedure (P less than .001).
Of the 61 patients who underwent manometry at 3 months, the researchers found that the mean pretreatment and posttreatment lower esophageal sphincter pressures were 28 mm Hg versus 9 mm Hg, respectively (P less than .001).
Results at 6 months and 12 months were comparable, with treatment success of 88.5% and 82.4%, respectively, and mean Eckhardt scores of 1.3 and 1.7, respectively (P less than .001 for both).
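The Eckhardt score used as the outcome measure above sums four symptom subscores (dysphagia, regurgitation, retrosternal pain, and weight loss), each graded 0-3, and treatment success is conventionally defined as a total score of 3 or less. A minimal sketch of that scoring logic (function names and example subscores are illustrative, not taken from the study):

```python
# Hedged sketch of Eckhardt symptom scoring for achalasia.
# Each symptom is graded 0-3, so the total ranges 0-12; a total <= 3
# is the conventional definition of treatment success (assumption).

def eckhardt_score(dysphagia: int, regurgitation: int,
                   chest_pain: int, weight_loss: int) -> int:
    for grade in (dysphagia, regurgitation, chest_pain, weight_loss):
        if not 0 <= grade <= 3:
            raise ValueError("each symptom grade must be 0-3")
    return dysphagia + regurgitation + chest_pain + weight_loss

def treatment_success(score: int, threshold: int = 3) -> bool:
    return score <= threshold

# Illustrative subscores yielding the mean values reported in the study:
# a pretreatment score of 7 dropping to 1 post procedure.
pre = eckhardt_score(2, 2, 1, 2)   # 7
post = eckhardt_score(1, 0, 0, 0)  # 1
```

The individual subscores here are invented for illustration; only the totals (7 and 1) come from the article.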
Patients who failed treatment subsequently underwent laparoscopic Heller myotomy (n = 3) or balloon dilatation (n = 5), with safe and effective outcomes, reported the authors.
"Because the target area for the myotomy during POEM is lateral (on the lesser curvature side) and the myotomy during LHM is anterior, subsequent LHM seems to be a feasible second-line treatment if POEM fails."
Moreover, roughly half of the patients in the current study had previously undergone endoscopic balloon dilatation or botulinum toxin injection before POEM, the researchers wrote. "This shows that POEM is safe and efficient after previous treatments."
Nevertheless, the procedure is not without risk. "Visible complete transmural openings into the mediastinum and into the peritoneal cavity occurred in the majority of patients," they pointed out. "Therefore, POEM potentially carries the risk of mediastinitis/peritonitis and/or damage to surrounding organs."
Clip dislocation at mucosal closure (n = 3), mucosal injury from electrocautery or laceration (n = 3), and bleeding requiring intervention (n = 1) also occurred.
Finally, at 12 months post procedure, roughly 37% of patients complained of gastroesophageal reflux, with just under 8% of these patients reporting daily reflux symptoms.
Overall, 29% were prescribed a proton pump inhibitor; 19.6% of these used a PPI daily.
The authors disclosed no conflicts of interest. The study was supported by EURO-NOTES Foundation – a partnership between the European Association for Endoscopic Surgery and the European Society of Gastrointestinal Endoscopy – and Olympus, maker of endotherapeutic supplies.
Major finding: At 12 months following peroral endoscopic myotomy, 82.4% of patients reported sustained treatment success, with a mean Eckhardt score of 1.7.
Data source: A prospective, international study of 70 patients who underwent POEM at five centers in Europe and North America.
Disclosures: The authors disclosed no conflicts of interest. The study was supported by EURO-NOTES Foundation – a partnership between the European Association for Endoscopic Surgery and the European Society of Gastrointestinal Endoscopy – and Olympus, maker of endotherapeutic supplies.
Triple therapy underutilized in HCV
Fewer than 20% of patients with hepatitis C virus genotype 1 receive triple therapy with boceprevir or telaprevir, according to Dr. Emerson Y. Chen and colleagues. The results are in the August issue of Clinical Gastroenterology and Hepatology.
The "disappointingly low use of the new therapies, even after a decade without novel medications," suggests that real-world use of these drugs is hampered by factors including safety and the low predicted response rates in prior nonresponders, they wrote.
Dr. Chen of the University of Texas Southwestern Medical School, Dallas, looked at 487 patients with HCV genotype 1 presenting to one of two academic outpatient hepatology practices in Dallas and Miami.
The majority of patients were between 50 and 60 years of age, male, and white. More than two-thirds had private insurance, and half of the patients were treatment naive.
Overall, the authors found that 91 patients (18.7%) were started on triple therapy (boceprevir or telaprevir plus pegylated interferon and ribavirin) while the remaining 396 patients remained untreated.
Dr. Chen then assessed the reasons for treatment deferral. The most common, he found, was contraindication (50.5%), including complications of liver disease and medical comorbidities. The next biggest reason cited was "patient choice" (22.5%), which included concerns about side effects, limited success rates, financial issues, or inability to commit time for treatment. Finally, the presence of less advanced liver disease (17.4%) and the anticipation (among providers and patients alike) of better future therapies (9.6%) were also factors in treatment deferral.
"Although several expert panels for treatment recommendations are available, there exists an uncertainty among providers regarding selecting patients without major contraindications for triple therapy now over newer direct-acting antiviral therapy later," they added.
Next, the researchers compared patients who initiated triple therapy with those who deferred treatment.
In univariate analysis, they found that about three-fourths of patients who chose to initiate triple therapy had fibrosis stage 3 or were cirrhotic, although less than 10% had a history of overt decompensation.
"In contrast, among those who deferred treatment, more than 20% had decompensated liver disease, and 46% had an early fibrosis stage," they wrote.
Looking at patients’ mean Model for End-Stage Liver Disease scores, cirrhotic patients who deferred treatment registered a 9.3, compared with 7.3 for patients who elected to undergo triple therapy treatment (P less than .001). Additionally, 90% of cirrhotic patients on treatment were Child-Pugh class A compared with 63% in the nontreatment group (P = .003).
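For reference, the Model for End-Stage Liver Disease score cited here is computed from serum bilirubin, INR, and creatinine. A sketch using one common formulation (3.78·ln(bilirubin) + 11.2·ln(INR) + 9.57·ln(creatinine) + 6.43, with lab values floored at 1.0 and creatinine capped at 4.0 mg/dL, per the usual UNOS convention); this is an assumption about the scoring convention used, not code from the study:

```python
import math

def meld_score(bilirubin_mg_dl: float, inr: float,
               creatinine_mg_dl: float) -> int:
    # Common MELD formulation: lab values below 1.0 are floored at 1.0
    # and creatinine is capped at 4.0 mg/dL (usual UNOS convention).
    bili = max(bilirubin_mg_dl, 1.0)
    inr_v = max(inr, 1.0)
    creat = min(max(creatinine_mg_dl, 1.0), 4.0)
    raw = (3.78 * math.log(bili) + 11.2 * math.log(inr_v)
           + 9.57 * math.log(creat) + 6.43)
    return round(raw)

# All-normal labs give the conventional floor of 6.
print(meld_score(1.0, 1.0, 1.0))  # 6
```

Mean scores of 9.3 versus 7.3, as reported here, thus both sit toward the low (less severe) end of the scale.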
Commenting on the findings, Dr. Chen wrote that "although patients with mild fibrosis or persistently normal liver function tests have often not been recommended for repeat biopsy or treatment, they would likely benefit from treatment before they become older or develop contraindications.
"Nevertheless, many hepatologists appear to be recommending deferral of treatment for patients with mild to moderate fibrosis, waiting for the second-generation direct-acting antivirals with even higher sustained virologic response rates" and fewer side effects.
"Newly diagnosed treatment-naive patients along with those with less advanced liver disease will benefit from society-wide consensus regarding therapy initiation now or at a later time."
The authors disclosed funding from the Doris Duke Charitable Foundation and the University of Miami. Two authors disclosed ties to Merck, maker of boceprevir and ribavirin, and Vertex, which makes telaprevir.
Major finding: From a cohort of nearly 500 hepatitis C patients, 91 (18.7%) were started on triple therapy.
Data source: A retrospective, 1-year, cross-sectional study of adults with HCV infection presenting to two outpatient hepatology practices in Dallas and Miami.
Disclosures: The authors disclosed funding from the Doris Duke Charitable Foundation and the University of Miami. Two authors disclosed ties to Merck, maker of boceprevir and ribavirin, and Vertex, which makes telaprevir.
High discontinuation rate noted for direct-acting antiviral therapy
Despite high rates of early clinical response, 30% of veterans taking direct-acting antivirals for hepatitis C discontinued treatment by week 24.
The findings highlight "the challenges associated with maintenance of these regimens" in a real-world cohort, wrote Pamela S. Belperio, Pharm.D., and colleagues. The study appears in the August issue of Clinical Gastroenterology and Hepatology.
Dr. Belperio of Veterans Affairs Palo Alto (Calif.) Health Care System, looked at 859 patients registered with the VA’s Clinical Case Registry for HCV who initiated triple therapy with pegylated interferon, ribavirin, and either boceprevir (n = 661) or telaprevir (n = 198) before Jan. 1, 2012, at 94 different VA facilities.
The authors determined treatment duration from medication dispensing dates, and patients were classified as having an "undetectable" response if HCV RNA levels on their most recent assay were undetectable.
Patients’ mean age was 57 years; most of the patients were male. Patients with HIV or hepatitis B virus coinfection, hepatocellular carcinoma, or liver transplantation were excluded.
Even so, "Up to 13% of boceprevir-treated veterans and 24% of telaprevir-treated veterans would have been excluded from the phase III trials [of these therapies] based on hematologic exclusion criteria used in the telaprevir trials," Dr. Belperio wrote.
Despite this, the authors found that by week 12, 76% of all treatment-naive, noncirrhotic patients taking boceprevir had achieved undetectable viral loads, as had 78% of telaprevir patients.
By 24 weeks, those numbers were 74% for boceprevir patients and 60% for telaprevir patients.
However, the researchers also found that about one-third of patients discontinued treatment with either boceprevir or telaprevir before 24 weeks (30% and 34%, respectively; P = .37), with an incidence of hematologic adverse events that was both higher and more pronounced than has been reported in clinical trials of direct-acting antivirals (DAAs), wrote Dr. Belperio.
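The nonsignificant difference in discontinuation rates (30% vs. 34%, P = .37) can be sanity-checked with a standard two-proportion z-test. The counts below are back-calculated approximations from the reported percentages and group sizes, so the resulting P value only roughly approximates the published one:

```python
import math

def two_proportion_p(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided two-proportion z-test using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # Two-sided P from the standard normal CDF via math.erf.
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Back-calculated (approximate) counts: ~30% of 661 boceprevir patients
# and ~34% of 198 telaprevir patients discontinued by week 24.
p_value = two_proportion_p(198, 661, 67, 198)
```

With these approximate counts the P value lands well above .05, consistent with the authors' conclusion that discontinuation did not differ significantly between the two drugs.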
Indeed, anemia of at least grade 1 (hemoglobin below normal limits but greater than or equal to 10 g/dL) occurred in 50% of patients in both DAA groups, "which is up to a 15% increased incidence of anemia over what has been reported elsewhere."
There was also significant thrombocytopenia with both drugs, which Dr. Belperio said occurred at four times the rate seen in clinical trials. In fact, her group reported that 66% of patients taking boceprevir had grade 1 thrombocytopenia (platelets below normal limits but greater than or equal to 75,000/mm3), as did 59% of patients taking telaprevir.
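The anemia and thrombocytopenia grades described here are simple threshold rules: hemoglobin below the lab's lower limit of normal but at least 10 g/dL, and platelets below normal but at least 75,000/mm3. A hedged sketch of that classification; the default lower-limit values are illustrative placeholders, since normal ranges vary by laboratory:

```python
def grade1_anemia(hgb_g_dl: float, lower_limit: float = 13.0) -> bool:
    # Grade 1: hemoglobin below the lab's lower limit of normal
    # but >= 10 g/dL. The 13.0 g/dL default is illustrative only.
    return 10.0 <= hgb_g_dl < lower_limit

def grade1_thrombocytopenia(platelets_per_mm3: float,
                            lower_limit: float = 150_000) -> bool:
    # Grade 1: platelets below normal but >= 75,000/mm3;
    # the 150,000/mm3 default is illustrative only.
    return 75_000 <= platelets_per_mm3 < lower_limit
```

Patients falling below the grade 1 floors (hemoglobin under 10 g/dL or platelets under 75,000/mm3) would be graded more severely under such a scheme.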
"This may reflect the greater proportion of cirrhotic patients in our cohort compared with the clinical trials; however, it is concerning given the limited strategies currently available to manage thrombocytopenia and the unknown effect that PEG dose reductions may have on sustained virologic response in the context of DAA-based therapies," they wrote.
Treatment was also deemed futile, per Food and Drug Administration stopping criteria, for a subset of patients: 9% of boceprevir patients at week 12 and 5% at week 24, as well as 6% of telaprevir patients at week 4, 4% at week 12, and 7% at week 24.
The authors conceded several limitations to this study, including that they were unable to assess reasons for early treatment discontinuation.
Nevertheless, the data "offer clinicians a perspective on expectations of early response and safety of DAA regimens in routine clinical practice, allowing a more nuanced discussion between providers and patients regarding the risks and benefits of embarking on DAA-based treatment."
The authors disclosed that one coinvestigator has previously received grant support from Gilead Sciences, maker of HCV therapies. They disclosed no other conflicts of interest.
Major finding: By 24 weeks, 74% of hepatitis C patients taking boceprevir and 60% taking telaprevir had an early virologic response, but nearly one-third had discontinued treatment.
Data source: A cohort of 859 HCV patients from the Veterans Affairs Clinical Case Registry for HCV.
Disclosures: The authors disclosed that one coinvestigator has previously received grant support from Gilead Sciences, maker of HCV therapies. They disclosed no other conflicts of interest.