Aim for deep remission, high troughs in Crohn’s
"Deep remission" of Crohn’s disease is associated with better quality of life, physical function, and cost savings.
Moreover, this composite clinical and endoscopic endpoint may be best accomplished through monitoring of the tumor necrosis factor antagonists’ plasma concentration, meaning that the pharmacokinetics of these therapies must be accounted for when optimizing drug treatment.
Those are the findings from two new analyses in the March issue of Clinical Gastroenterology and Hepatology, both led by Dr. Jean-Frédéric Colombel of the Icahn School of Medicine at Mount Sinai in New York.
In the first study, Dr. Colombel and his colleagues looked at 135 adults with moderate to severe ileocolonic Crohn’s enrolled in the EXTEND trial, a 52-week, randomized, double-blind placebo-controlled trial of adalimumab (doi:10.1016/j.cgh.2013.06.019).
The goal of EXTEND was to assess the effect of induction plus maintenance dosing of adalimumab, versus induction only. All patients received open-label adalimumab (160 mg and 80 mg at weeks 0 and 2, respectively) during the 4-week induction phase and were randomized at week 4 to receive adalimumab 40 mg every other week or placebo.
Overall, 19% of patients who received both adalimumab induction and maintenance therapy achieved so-called deep remission, defined as the absence of mucosal ulceration plus clinical remission (Crohn’s Disease Activity Index [CDAI], less than 150). Of the remaining responders, 8 had an absence of mucosal ulceration without clinical remission, and 19 had clinical remission with persistent mucosal ulceration.
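As a purely illustrative aid, the composite endpoint described above can be written as a simple decision rule; the short Python sketch below uses hypothetical function and variable names and is not the trial's actual case-definition code.

```python
def classify_response(cdai_score: float, mucosal_ulceration: bool) -> str:
    """Classify a patient against the composite endpoint described above.

    cdai_score         -- Crohn's Disease Activity Index (clinical remission: score < 150)
    mucosal_ulceration -- True if any mucosal ulceration persists on endoscopy
    """
    clinical_remission = cdai_score < 150
    mucosal_healing = not mucosal_ulceration

    if clinical_remission and mucosal_healing:
        return "deep remission"
    if mucosal_healing:
        return "mucosal healing without clinical remission"
    if clinical_remission:
        return "clinical remission with persistent ulceration"
    return "neither endpoint met"
```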
Dr. Colombel and his coinvestigators found that patients in deep remission registered significantly fewer days absent from work, less work productivity impairment, greater work productivity, and less daily nonwork activity impairment, compared with patients who did not achieve deep remission (P less than .05 for all).
Moreover, at 1 year after therapy induction, significantly more patients in the deep remission group achieved remission according to the Inflammatory Bowel Disease Questionnaire (64% vs. 26%; P less than .05) and normal Short Form–36 Health Survey status (55% vs. 19%; P less than .05), compared with patients who were not in deep remission.
Finally, during the 40 weeks after early deep remission achievement, estimated savings among deep remission patients were $6,117 for direct medical costs and $4,243 for indirect costs related to productivity, compared with the costs for patients not in deep remission.
"This treatment target in CD [of deep remission], which combines symptom control with an objective indicator of inflammatory disease activity, still is evolving," wrote the investigators.
However, "current analysis provides preliminary evidence that early [deep remission] is achievable and may be a useful treatment target," they added.
The second analysis, also by Dr. Colombel, looked at the optimization of TNF antagonists in Crohn’s disease – specifically, at certolizumab pegol (doi:10.1016/j.cgh.2013.10.025).
In this post hoc analysis, Dr. Colombel looked at patients enrolled in the MUSIC trial, a 54-week, multicenter, single-arm open-label study assessing endoscopic improvement in patients with moderate to severe Crohn’s disease.
All patients received loading doses of certolizumab pegol 400 mg at 0, 2, and 4 weeks, followed by a maintenance dose of 400 mg every 4 weeks.
The authors found that patients who achieved a clinical response and those who achieved an endoscopic response at week 10 had "nominally higher" trough plasma certolizumab pegol concentrations at week 8 than did the corresponding nonresponders (16.5 mcg/mL for patients with a clinical response and 19.8 mcg/mL for patients with an endoscopic response, vs. 13.7 mcg/mL and 11.5 mcg/mL, respectively, in the nonresponders).
Similarly, patients who achieved clinical and endoscopic remission at week 10 also had nominally higher trough plasma concentrations at week 8, with clinical remission patients registering 17.6 mcg/mL (vs. 11.1 mcg/mL in patients without clinical remission) and endoscopic remission patients measuring 19.2 mcg/mL (vs. 12.6 mcg/mL in patients who did not achieve that endpoint).
"Plasma certolizumab pegol concentration is not readily measured in standard clinical practice at this time," wrote the authors.
However, "additional studies are needed to understand the relationships among clinical response, body weight, and plasma concentration of the TNF antagonist," they added.
Dr. Colombel and his colleagues disclosed numerous financial relationships with pharmaceutical companies, including the makers of adalimumab and certolizumab pegol; the EXTEND trial was funded by AbbVie, the maker of adalimumab, and the MUSIC trial was funded by UCB Pharma, maker of certolizumab pegol.
This month's issue of Clinical Gastroenterology and Hepatology contains two articles that explore important, novel concepts in the management of patients with Crohn's disease: mucosal healing as a potential therapeutic endpoint, and therapeutic drug monitoring of tumor necrosis factor antagonists.
New endpoints are required because the currently accepted endpoint of a Crohn's Disease Activity Index score of less than 150 points does not optimally track with endoscopic healing, which appears to correlate with favorable longer-term outcomes such as reduced need for corticosteroids, hospitalizations, and major abdominal surgeries.
In a secondary analysis of the EXTEND trial (a study of endoscopic healing of Crohn's disease with adalimumab), the authors were able to show that patients who had achieved the endpoint of "deep remission" (that is, clinical remission and absence of ulcerations on endoscopy) by week 12 had fewer hospitalizations and surgeries than those who had not achieved deep remission.
Interestingly, patients achieving deep remission had better outcomes than those who achieved endoscopic but not clinical remission, but there did not appear to be an incremental benefit of deep remission over clinical remission alone.
One should consider this a proof-of-concept study of the potential utility of deep remission as an endpoint, and we need to see prospective trials before concluding that deep remission is an endpoint to strive for in all of our patients.
The second article was a secondary analysis of the MUSIC trial (a study of endoscopic improvement of Crohn's disease with certolizumab pegol) and attempted to correlate certolizumab pegol concentrations with endoscopic response or remission.
The authors found that levels at week 8 were significantly associated with rates of endoscopic response and remission at week 10. They also found that higher body weight and serum C-reactive protein levels were associated with lower certolizumab levels.
This study corroborates evidence from other studies showing that concentrations of infliximab and adalimumab correlate with clinical outcomes.
Furthermore, the relationship between certolizumab plasma levels and other factors such as body weight and C-reactive protein levels highlights the complex pharmacokinetics of these biologic agents and underscores the potential need for therapeutic drug monitoring.
Dr. Edward V. Loftus Jr., AGAF, is professor of medicine and director, inflammatory bowel disease clinic, division of gastroenterology and hepatology, Mayo Clinic, Rochester, Minn. He is a consultant for AbbVie, UCB, Janssen, Takeda-Millennium, and Immune Pharmaceuticals and receives research support from AbbVie, UCB, Janssen, Takeda-Millennium, Pfizer, Amgen, Santarus, Robarts Clinical Trials, Bristol-Myers Squibb, GlaxoSmithKline, Genentech, Shire, and Braintree Labs.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: Deep remission in Crohn’s disease is a novel and useful treatment target; reaching it may depend on monitoring blood levels of the drugs used to treat the disease.
Data source: Analyses of the EXTEND and MUSIC trials.
Disclosures: Dr. Colombel and his colleagues disclosed numerous financial relationships with pharmaceutical companies, including the makers of adalimumab and certolizumab pegol; the EXTEND trial was funded by AbbVie, the maker of adalimumab, and the MUSIC trial was funded by UCB Pharma, maker of certolizumab pegol.
Standard cardiovascular risk models no good in rheumatoid arthritis
Established cardiovascular risk models such as the Systematic Coronary Risk Evaluation and the Framingham risk score either underestimated or overestimated the risk of cardiovascular events in a retrospective analysis of prospectively collected data from a cohort of patients with early rheumatoid arthritis.
These findings illustrate that what is needed is an RA-specific CV risk model whose performance "should be compared with the performance of the current risk algorithms," according to lead investigator Elke E.A. Arts of Radboud University Nijmegen (Netherlands) Medical Centre and her colleagues.
The four different risk algorithms that failed to correctly estimate the 10-year risk of a CV event were the Framingham risk score, the Systematic Coronary Risk Evaluation (SCORE), the Reynolds risk score, and the QRisk II risk score.
Proposals to fix these models suggest that patients at risk might be better identified by adjusting the cutoff points in CV risk that are used as indications for primary prevention in RA patients, but "this could also lead to overtreatment, as the majority of patients in the lower risk group do not develop events," cautioned Ms. Arts, a PhD student at the university, and her associates.
"Alternatively, a correction factor could be used to adjust the CV risk in patients with RA, as was suggested by the European League Against Rheumatism (EULAR) recommendations for CV risk management," they noted. The EULAR recommendations advise multiplying the results of the SCORE risk algorithm by 1.5 for RA patients who fulfill two of the following three criteria: disease duration greater than 10 years, rheumatoid factor or anti–cyclic citrullinated peptide positivity, and extra-articular manifestations. However, "there are no data supporting such a multiplicator; it was based on expert opinion," they wrote.
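To make the EULAR rule concrete, here is a minimal Python sketch (hypothetical function and argument names, not part of the study) that applies the 1.5 multiplier to a SCORE-predicted risk when at least two of the three criteria are fulfilled.

```python
def eular_adjusted_score(score_risk: float,
                         disease_duration_years: float,
                         rf_or_anti_ccp_positive: bool,
                         extra_articular_disease: bool) -> float:
    """Multiply a SCORE-predicted CV risk by 1.5 when >= 2 of the 3 EULAR
    criteria are met; otherwise return the risk unchanged."""
    criteria_met = sum([
        disease_duration_years > 10,
        rf_or_anti_ccp_positive,
        extra_articular_disease,
    ])
    return score_risk * 1.5 if criteria_met >= 2 else score_risk

# Example: a 12% SCORE risk in a seropositive patient with 12 years of disease
print(round(eular_adjusted_score(0.12, 12, True, False), 2))  # 0.18, i.e., 18%
```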
In the current study, the investigators applied the four risk algorithms to data from 1,050 patients who were originally enrolled in the Nijmegen Early RA Inception cohort. Patients with preexisting CV disease were excluded, and in cases with less than 10 years of follow-up, risk scores were "adjusted proportionally according to the length of actual follow-up and calculated as a proportion of 10 years" (Ann. Rheum. Dis. 2014 Jan. 3 [doi: 10.1136/annrheumdis-2013-204024]).
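The quoted adjustment is described only briefly; one plausible reading, sketched below in Python as an assumption rather than the authors' published method, is a linear rescaling of the 10-year predicted risk to the follow-up time actually observed.

```python
def rescale_risk_to_followup(predicted_10yr_risk: float, followup_years: float) -> float:
    """Rescale a 10-year predicted risk in proportion to the actual follow-up,
    assuming the 'proportion of 10 years' adjustment is a simple linear scaling."""
    return predicted_10yr_risk * min(followup_years, 10.0) / 10.0

# e.g., a 20% 10-year risk in a patient followed for 6 years ~ 12% over that period
```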
Overall, over the course of 9,957 patient-years available for analysis, there were 149 episodes of a first CV event, the primary endpoint of the study (1.14 events per 100 patient-years). This total included 67 cases of acute/unstable coronary syndrome (myocardial infarction or unstable angina), 24 cases of stable angina, 26 cerebrovascular accidents, 10 transient ischemic attacks, 18 cases of peripheral vascular disease, and 4 cases of heart failure. A total of 15 of these events were fatal.
The sensitivity of the models was 68%-87% for the risk score cutoff value of 10% (marking the difference between low risk and intermediate to high risk) and 40%-65% for the risk score cutoff value of 20% (marking the difference between low- to intermediate-risk and high-risk patients). The corresponding specificity ranges were 55%-76% and 77%-88%, respectively. Those cutoff points are recommended to be used as indicators for CV preventive treatment such as lifestyle adjustments and drug therapy interventions.
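For readers who want to see how such sensitivities and specificities are derived, the following Python sketch (a generic illustration with hypothetical names, not the study's analysis code) computes both at a given risk-score cutoff.

```python
import numpy as np

def sens_spec_at_cutoff(predicted_risk, had_event, cutoff):
    """Sensitivity and specificity of the rule 'predicted risk >= cutoff'
    for identifying patients who went on to have a CV event."""
    flagged = np.asarray(predicted_risk, dtype=float) >= cutoff
    event = np.asarray(had_event, dtype=bool)

    true_pos = np.sum(flagged & event)
    false_neg = np.sum(~flagged & event)
    true_neg = np.sum(~flagged & ~event)
    false_pos = np.sum(flagged & ~event)

    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Evaluated at the two cutoffs discussed above (10% and 20% predicted 10-year risk):
# sens_spec_at_cutoff(risk_scores, events, 0.10); sens_spec_at_cutoff(risk_scores, events, 0.20)
```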
For the SCORE risk model, the researchers found that CV risk predictions deviated from observed CV events in all risk deciles, but especially in the middle and top deciles. Indeed, tests of the fit of the SCORE model to the data with the Hosmer-Lemeshow (H-L) test yielded a P value of less than .001, indicating poor model fit (where good fit corresponds to P values greater than .05).
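For context, the Hosmer-Lemeshow statistic compares observed and expected event counts across groups (typically deciles) of predicted risk; the sketch below is a generic textbook implementation in Python, not the authors' code.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(predicted_risk, had_event, n_groups=10):
    """Hosmer-Lemeshow goodness-of-fit test over groups of predicted risk."""
    risk = np.asarray(predicted_risk, dtype=float)
    event = np.asarray(had_event, dtype=float)

    order = np.argsort(risk)                  # sort subjects by predicted risk
    groups = np.array_split(order, n_groups)  # split into (near-)deciles

    statistic = 0.0
    for idx in groups:
        n = len(idx)
        observed = event[idx].sum()           # observed events in the group
        expected = risk[idx].sum()            # expected events = sum of predicted risks
        mean_risk = expected / n
        statistic += (observed - expected) ** 2 / (n * mean_risk * (1 - mean_risk))

    p_value = chi2.sf(statistic, df=n_groups - 2)  # small P suggests poor calibration
    return statistic, p_value
```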
For the Framingham model, the number of CV events predicted was similar to the number observed, with only a modest difference between predicted and observed CV risk in the lower- and middle-risk deciles, according to the authors. However, the top two deciles decidedly under- and overestimated risk, respectively, and the H-L test indicated poor fit for the model as a whole (P = .024).
The next tested model, the Reynolds risk score, underestimated the number of CV events, with an overall P value of .020 on the H-L test.
The QRisk II had a moderately good fit on the H-L test (P = .20) but still mainly overestimated the observed CV risk.
Ms. Arts stated in an e-mail that this study was originally presented at the 2013 American College of Rheumatology annual meeting. She added that she is currently collaborating with a group on an RA-specific CV risk model.
Ms. Arts disclosed that the study was partially funded by the Rheumatology Research University Nijmegen foundation. Three of her coauthors disclosed financial relationships with multiple pharmaceutical companies, including the makers of drugs and therapies for RA.
FROM ANNALS OF THE RHEUMATIC DISEASES
Major finding: The sensitivity of the models was 68%-87% for the risk score cutoff value of 10% (marking the difference between low risk and intermediate to high risk) and 40%-65% for the cutoff value of 20% (marking the difference between low- to intermediate-risk and high-risk patients). The corresponding specificity ranges were 55%-76% and 77%-88%, respectively.
Data source: A retrospective analysis of prospectively collected data from 1,050 patients in the Nijmegen Early RA Inception cohort.
Disclosures: Ms. Arts disclosed that the study was partially funded by the Rheumatology Research University Nijmegen foundation. Three of her coauthors disclosed financial relationships with multiple pharmaceutical companies, including the makers of drugs and therapies for RA.
Despite benefits, ‘bundling’ endoscopy is not norm
In more than one-third of cases, same-day bidirectional endoscopy is not performed in Medicare beneficiaries, despite the fact that so-called "bundled" endoscopies offer significant cost savings to Medicare, wrote Dr. Hashem B. El-Serag and colleagues in the January issue of Clinical Gastroenterology and Hepatology (doi:10.1016/j.cgh.2013.07.021).
Indeed, "Although in some instances the referral patterns and clinical indications may have precluded bundling, the persistence of the findings in analyses that adjusted for indications and the considerable geographic variation in practice suggest a component of physician discretion," they wrote.
Dr. El-Serag of Baylor College of Medicine, Houston, and colleagues looked at data from 12,982 Medicare beneficiaries with claims for colonoscopy and EGD within 180 days of each other captured by the Surveillance Epidemiology and End Results (SEER) Program.
Overall, 8,404 of these patients (64.7%) had "bundled" procedures, meaning that both upper and lower endoscopy occurred on the same day during the same session.
On the other hand, 2,359 patients (18.2%) did not have their procedures bundled, but rather had both procedures within 30 days of each other.
And an additional 2,219 patients (17.1%) underwent the procedures at an even greater interval, between 30 and 180 days of each other.
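As a rough illustration of how the three groups described above could be derived from claims dates (the study's exact claims logic is not given in this article, and same-day claims are used here as a proxy for same-session bundling), consider the following Python sketch.

```python
from datetime import date

def bundling_category(colonoscopy_date: date, egd_date: date) -> str:
    """Categorize a patient by the interval between colonoscopy and EGD claims."""
    gap_days = abs((colonoscopy_date - egd_date).days)
    if gap_days == 0:
        return "bundled (same day)"            # 8,404 patients (64.7%)
    if gap_days <= 30:
        return "separate, within 30 days"      # 2,359 patients (18.2%)
    if gap_days <= 180:
        return "separate, 30-180 days apart"   # 2,219 patients (17.1%)
    return "outside the 180-day study window"

print(bundling_category(date(2014, 3, 10), date(2014, 3, 10)))  # bundled (same day)
```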
"Patients with bundled procedures were slightly younger, more likely to be white, more likely to reside in an urban area with a higher median educational level, and more likely to have low comorbidity scores," wrote the authors.
However, there was also a "strong and significant" geographic component to the frequency of bundling: patients undergoing procedures in the Northeastern United States had the lowest rates, while patients in the West had the highest.
They also found that patients with GI bleeding were significantly more likely to have bundled procedures compared with patients undergoing screening or surveillance.
"This association between indications and bundling status persisted in a subgroup analysis of patients with a comorbidity score of 2 or greater (n = 2,961)," they wrote.
In an attempt to explain their findings, the researchers postulated that since Medicare reimburses bundled procedures at a rate that is less than the sum of each charged separately, physicians have a financial disincentive to bundle procedures.
To that end, "it is worth considering that insurers reimburse physicians fully for bundled procedures," they wrote, pending a formal cost-effectiveness analysis.
They conceded, however, that the data used in this study were collected primarily for billing purposes, such that clinical details – for example, procedural findings – were lacking.
"For example, in an open-access endoscopy system a patient may have been referred for only one procedure, and, based on the outcome, a second procedure may have been requested at a later time," they explained.
Nevertheless, "The missed opportunities related to nonbundled EGD and colonoscopy are likely to be associated with considerable increase in cost related to physician (gastroenterology, anesthesia, pathology) and facility fees," concluded the authors, as well as indirect costs of work days lost.
Dr. El-Serag and his colleagues stated that they had no conflicts of interest. They disclosed grants from the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institutes of Health/National Cancer Institute, the Houston VA Health Services Research & Development Center of Excellence, and the Texas Digestive Disease Center National Institutes of Health, as well as the National Center for Advancing Translational Sciences.
Major finding: Of more than 12,000 Medicare patients who underwent upper and lower endoscopy, fewer than two-thirds had the procedures "bundled" on the same day, despite significant cost savings for Medicare and convenience for patients.
Data source: A total of 12,982 Medicare patients in the Surveillance Epidemiology and End Results (SEER) Program.
Disclosures: Dr. El-Serag and his colleagues stated that they had no conflicts of interest. They disclosed grants from the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institutes of Health/National Cancer Institute, the Houston VA Health Services Research & Development Center of Excellence, and the Texas Digestive Disease Center National Institutes of Health, as well as the National Center for Advancing Translational Sciences.
Low-FODMAP diet reduced IBS symptoms
Diets low in fermentable oligosaccharides, disaccharides, monosaccharides, and polyols reduced functional gastrointestinal symptoms in patients with irritable bowel syndrome.
The finding, published by Dr. Emma P. Halmos and her colleagues in the January issue of Gastroenterology (doi:10.1053/j.gastro.2013.09.046), confirms that the diet’s "growing popularity" is warranted and supports its use as a first-line therapy for IBS.
Dr. Halmos of Monash University, in Box Hill, Australia, and her colleagues looked at 30 patients with IBS and 8 healthy controls.
At baseline, all patients kept a food diary of their normal daily dietary intake for 1 week and recorded their IBS symptoms.
Patients were then randomized to receive either 21 days of a diet low in fermentable oligosaccharides, disaccharides, monosaccharides, and polyols (FODMAPs), defined as no more than 5 g of these carbohydrates per sitting, or food representing a typical Australian diet.
For example, instead of a breakfast of "wheat biscuit–type cereal with 1/2 cup lactose-free milk, two slices wheat toast," the low-FODMAP adherents were given one cup corn flakes with 1/2 cup lactose-free milk and two slices of spelt toast.
Almost all food was provided, including three main meals and three snacks daily.
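Purely to illustrate the 5-g-per-sitting threshold mentioned above, the snippet below uses made-up FODMAP values for a few foods (hypothetical numbers, not data from the study) to check whether a meal stays within the low-FODMAP limit.

```python
# Hypothetical FODMAP content per serving, in grams (illustrative values only)
fodmap_grams = {
    "corn flakes (1 cup)": 0.2,
    "lactose-free milk (1/2 cup)": 0.1,
    "spelt toast (2 slices)": 0.9,
    "wheat toast (2 slices)": 2.5,
}

def within_low_fodmap_limit(meal_items, limit_g=5.0):
    """Return True if the meal's total FODMAP content is at or below the limit."""
    total = sum(fodmap_grams[item] for item in meal_items)
    return total <= limit_g

print(within_low_fodmap_limit(
    ["corn flakes (1 cup)", "lactose-free milk (1/2 cup)", "spelt toast (2 slices)"]))  # True
```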
Next came a washout period during which each participant resumed his or her usual diet; the crossover to the alternate diet did not begin until symptoms had returned to baseline levels.
Patients were prohibited from taking any other IBS therapies during the study period and were not permitted to take pharmacologic agents that could alter their symptoms, such as laxatives or antidiarrheals.
At baseline, the mean overall gastrointestinal symptoms score for IBS patients was 36.0 mm on the Visual Analogue Scale.
By the final 14 days of the intervention, IBS patients on the low-FODMAP diet reported a mean overall symptom score of 22.8 mm (P less than .001, compared with baseline).
That was in contrast to the 44.9 mm among IBS patients on the typical Australian diet (P less than .001, compared with baseline), with the difference between the two scores also reaching statistical significance (P less than .001).
Moreover, 21 of 30 IBS patients reported improvements of 10 mm or more on the low-FODMAP diet, wrote the authors.
Healthy controls, meanwhile, had very low scores at baseline (17.8 mm), and there was no change in symptoms on the low-FODMAP or typical Australian diets.
The authors also assessed adherence, as recorded in food diaries. IBS patients were adherent for a median 41 of the 42 days of the combined diets, and healthy controls were adherent for the entire 42 days.
Additionally, "If adherence for at least 17 days of the 21 days of controlled diet (greater than 81% of the days) was arbitrarily considered compliant, then all participants were adherent to the typical Australian diet, and 80% of IBS participants (24 of 30) and 100% of healthy controls were adherent to the low-FODMAP diet," wrote the authors.
According to the authors, one of the strengths of this study was that it compared the low-FODMAP diet with a typical Australian diet, as opposed to an intentionally very-high-FODMAP regimen, as earlier studies have done.
They added that providing almost all food to participants facilitated a high degree of adherence.
However, "In life, the low-FODMAP diet is dietitian taught. Dietary restriction would have more varying degrees of compliance and depend on the patients’ degree of understanding, food choices, and motivation for altering dietary habits, as well as the dietitians’ advice on level of FODMAP restriction required," they wrote.
Dr. Halmos disclosed that two coauthors have previously published books on food intolerances and a low-FODMAP diet. They wrote that the study was supported by the National Health and Medical Research Council of Australia, the Les and Eva Erdi Foundation, and by a scholarship from Monash University.
FROM GASTROENTEROLOGY
Golimumab maintains response in UC
Subcutaneous golimumab induces and maintains clinical response in ulcerative colitis, according to two studies in the January issue of Gastroenterology.
In the first, Dr. William J. Sandborn of the University of California, San Diego, and his colleagues looked at 1,064 patients enrolled in the PURSUIT-SC study (Program of Ulcerative Colitis Research Studies Utilizing an Investigational Treatment–Subcutaneous), an integrated phase II and phase III study of subcutaneous golimumab.
Golimumab, developed by Janssen Pharmaceutical Companies of Johnson & Johnson – the sponsor of both studies in this month’s Gastroenterology – is a fully human monoclonal antibody to tumor necrosis factor–alpha.
All patients had an established diagnosis of UC and moderate to severe disease activity, defined as a Mayo score of 6-12, with an endoscopic subscore of at least 2.
Additionally, all patients had previously failed treatment with oral 5-aminosalicylates, oral corticosteroids, azathioprine (AZA), and/or 6-mercaptopurine, or were unable to wean off corticosteroids.
Patients enrolled in the phase II, dose-finding portion of the study (n = 169) were randomized to subcutaneous injections of placebo or one of three golimumab regimens at weeks 0 and 2: 100/50 mg (i.e., 100 mg at week 0 and 50 mg at week 2), 200/100 mg, or 400/200 mg.
"An additional 122 patients were enrolled while phase II data were being analyzed," wrote the authors.
"Clinical response" was defined as a decrease from baseline in the Mayo score of at least 30% and 3 or more points, accompanied by either a rectal bleeding subscore of 0 or 1, or a decrease from baseline in the rectal bleeding subscore of at least 1.
"Clinical remission" was defined as a Mayo score less than or equal to 2, with no individual subscore greater than 1.
Finally, "mucosal healing" was defined as a Mayo endoscopy subscore of 0 or 1.
The authors found that the median changes from the baseline Mayo score were –1.0, –3.0, –2.0, and –3.0 for placebo and golimumab 100/50 mg, 200/100 mg, and 400/200 mg, respectively.
Moreover, at week 6, the proportions of patients in clinical response or remission or with mucosal healing were numerically greater in the golimumab 400/200–mg group than in the placebo group, and Inflammatory Bowel Disease Questionnaire (IBDQ) scores were superior to those of placebo patients.
Next, after analysis of the dose-finding data, the 200/100–mg and 400/200–mg doses were selected for evaluation in phase III, a dose confirmation trial.
Overall, 774 patients participated in this portion of the trial, randomized to placebo, golimumab 200/100 mg, or 400/200 mg at weeks 0 and 2.
By week 6, significantly greater proportions of patients in the golimumab 200/100–mg and 400/200–mg groups (51.0% and 54.9%, respectively) were in clinical response, compared with patients assigned to placebo (30.3%; P less than .0001 for both comparisons).
"The efficacy of both golimumab induction regimens was also demonstrated for the major secondary endpoints of clinical remission, mucosal healing, and improvement from baseline in the IBDQ score, all at week 6," they added.
Looking at safety, nearly 40% of all phase III cohorts reported adverse events, most commonly headache and nasopharyngitis.
The most common serious adverse event – exacerbation of UC – was reported by 8 (1.1%) golimumab-treated and 8 (2.4%) placebo-treated patients.
A second study, also led by Dr. Sandborn, looked at whether continuous golimumab dosing would result in remission maintained to 1 year.
To that end, Dr. Sandborn next randomized the golimumab initial responders in the first study (n = 464) to placebo or injections of 50 or 100 mg golimumab every 4 weeks through 1 year.
Overall, 49.7% of patients in the 100-mg cohort maintained a clinical response through week 54, compared with 47% in the 50-mg group and 31.2% in the placebo group (P less than .001 and P = .010 for each dose versus placebo, respectively).
Remission was also seen with greater frequency among patients receiving 100 mg golimumab at both 30 and 54 weeks (27.8%) compared with placebo (15.6%; P = .004).
Finally, the proportion of patients with mucosal healing at both weeks 30 and 54 was significantly greater for patients receiving golimumab 100 mg (42.4%) compared with placebo (26.6%; P = .002). The mucosal healing rate for the 50-mg golimumab cohort was 41.7%.
Looking at safety, the authors did concede that "the proportions of patients who experienced at least one serious adverse event or discontinued because of an adverse event were greater for golimumab 100 mg compared with placebo or golimumab 50 mg."
However, "the duration of follow-up evaluation in the placebo group was notably shorter than either of the golimumab groups, and when adjusted for follow-up time, difference in the incidences of serious adverse events per 100 patient-years were less remarkable across treatment groups."
Dr. Sandborn and colleagues disclosed ties with Janssen Research and Development, which sponsored the studies. Janssen Biotech is the maker of golimumab.
Golimumab recently gained regulatory approval for the treatment of patients with moderate to severe ulcerative colitis (UC). Sandborn and his colleagues recently published data from a multicenter, phase III international trial (PURSUIT) demonstrating that golimumab, a fully human, subcutaneously injectable anti-TNF agent, results in significantly higher rates of clinical response, compared with placebo (at 6 weeks), in a cohort of individuals with moderate to severe UC. Golimumab maintained clinical response for a duration of 54 weeks in addition to achieving higher rates of clinical remission and mucosal healing, compared with placebo. Patients who achieved remission at week 6 were able to more effectively maintain remission up to week 52 when receiving the higher dose of golimumab every 4 weeks (100 mg vs. 50 mg). Based on these findings, golimumab became the third anti-TNF to market for patients with UC.
There are several interesting considerations when interpreting these results. Lower Mayo scores were associated with increased clinical remission rates, while lower serum markers of inflammation such as CRP and fecal lactoferrin were associated with greater initial response, likely representing decreasing disease severity. Several different induction regimens and modes of delivery (intravenous and subcutaneous) were used; however, the rates of response appeared comparable between groups. Lastly, there were more adverse events in the higher dosing arm (100 mg vs. 50 mg) of the study.
As with other anti-TNF agents, response rates increased with higher serum drug levels, further highlighting that we may not be reaching the full clinical potential of this class of medications until therapeutic monitoring becomes more accessible. In summary, this research adds another weapon to the current armamentarium in the management of patients with moderate to severe UC who are not yet considering definitive surgical therapy.
Dr. Frank I. Scott, MSCE, is an instructor of medicine, division of gastroenterology, and a faculty fellow, Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania, Philadelphia. Dr. Gary R. Lichtenstein is professor of medicine, University of Pennsylvania, and director, Center for Inflammatory Bowel Disease, department of medicine, division of gastroenterology. He is a consultant for Abbott/AbbVie and has received research funding from, and consulted for, Janssen Biotech and UCB.
Major finding: At 6 weeks, more than half of ulcerative colitis patients taking golimumab had achieved clinical response, compared with less than a third of patients taking placebo.
Data source: The Program of Ulcerative Colitis Research Studies Utilizing an Investigational Treatment – Subcutaneous (PURSUIT-SC) study.
Disclosures: Dr. Sandborn and colleagues disclosed ties with Janssen Research and Development, which sponsored the studies. Janssen Biotech is the maker of golimumab.
Bisphosphonate safe, effective in IBD
Bisphosphonates are safe and effective for treating low bone mineral density in inflammatory bowel disease, according to a meta-analysis of 19 randomized controlled studies published in the January issue of Clinical Gastroenterology and Hepatology.
On the other hand, alternative therapies such as calcium plus vitamin D, calcitonin, and low-impact exercise demonstrated questionable efficacy, leading the authors to conclude that bisphosphonates alone "should be more aggressively considered" in this population.
Dr. John Melek of Mercy Hospital and Medical Center, Chicago, and Dr. Atsushi Sakuraba of the Inflammatory Bowel Disease Center at the University of Chicago Medicine searched MEDLINE, EMBASE, Google Scholar, the UMIN Clinical Trials Registry, and the Cochrane Central Register for randomized controlled trials conducted between 1981 and 2011 that assessed treatment for low BMD in IBD.
Overall, 11 of the 19 included studies evaluated bisphosphonates versus placebo or no treatment, while 4 looked at sodium fluoride versus placebo/no treatment, and 2 assessed calcium plus vitamin D versus placebo/no treatment.
The remaining analyses tested calcitonin versus placebo (1); low-impact exercise versus habitual physical activity (1); and bisphosphonates versus vitamin D (1) and fluoride (1). (Three studies compared multiple arms within the same study.)
Among all data on bisphosphonate efficacy, the authors found that the pooled overall effect by mixed-effect analysis revealed bisphosphonates to be significantly superior to control therapies in improving lumbar spine BMD, with a standard difference in means (SDm) of 0.51 (95% confidence interval [CI], 0.29-0.72; P less than .01).
Indeed, among the seven studies that reported improvements in the hip BMD, for example, a pooled overall effect by mixed-effect analysis showed that bisphosphonates were significantly superior to controls (both other treatments and no treatment; SDm, 0.26; 95% CI, 0.04-0.49; P = .02).
Moreover, among studies that reported the incidences of nonvertebral and vertebral fractures, the pooled odds ratios (ORs) were 0.35 (95% CI, 0.06-1.95; P = .23) and 0.38 (95% CI, 0.15-0.96; P = .04), respectively.
Looking at adverse effects, meanwhile, the pooled odds ratio of adverse effects was a nonsignificant 1.24 (95% CI, 0.83-1.85; P = .29), "demonstrating that bisphosphonate treatment was not associated with an increased incidence of adverse effects."
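As background on how pooled estimates of this kind are typically produced, the sketch below shows inverse-variance pooling of study-level standardized differences in means under a DerSimonian-Laird random-effects model. The study effects and variances here are invented for illustration and are not data from this meta-analysis.

```python
import math

# Hypothetical study-level effects: (standardized difference in means, variance).
# These numbers are invented for illustration, not taken from the meta-analysis.
studies = [(0.45, 0.04), (0.70, 0.09), (0.30, 0.02), (0.55, 0.06)]

# Fixed-effect (inverse-variance) pooled estimate
w = [1.0 / var for _, var in studies]
fixed = sum(wi * d for wi, (d, _) in zip(w, studies)) / sum(w)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2
q = sum(wi * (d - fixed) ** 2 for wi, (d, _) in zip(w, studies))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights add tau^2 to each study's variance
w_re = [1.0 / (var + tau2) for _, var in studies]
pooled = sum(wi * d for wi, (d, _) in zip(w_re, studies)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))

print(f"pooled SDm = {pooled:.2f} "
      f"(95% CI, {pooled - 1.96 * se:.2f}-{pooled + 1.96 * se:.2f})")
```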
Sodium fluoride, meanwhile, showed some efficacy: The four studies that assessed this treatment showed it was superior to placebo/no treatment in improving lumbar spine BMD (SDm, 1.18; 95% CI, 0.10-2.26; P = .03).
Fluoride did not, however, significantly improve hip BMD, compared with placebo, nor did it reduce the incidence of vertebral or nonvertebral fractures.
Similarly, the one study that looked at calcitonin (which assessed only children and adolescents with active or quiescent Crohn’s disease or ulcerative colitis, plus osteopenia or osteoporosis) found that this treatment was not superior to placebo in improving BMD at the lumbar spine, and it found no differences in hip BMD or in the incidence of fractures.
Nor was low-impact exercise superior to control (habitual physical activity) in improving BMD at the hip or lumbar spine; fracture incidence was not assessed.
The authors conceded several limitations, the greatest of which was the presence of marked heterogeneity among studies, which necessitated use of the random-effects model or mixed-effect analysis. Indeed, much of this heterogeneity was due to the fact that some studies aimed to prevent bone loss, whereas others treated established osteopenia, they wrote.
Additionally, although many IBD patients are today treated with biologic therapies, few studies evaluated BMD regimens in IBD patients who underwent biologic treatment.
The authors disclosed no conflicts of interest related to this analysis. Dr. Sakuraba was supported by the Foreign Clinical Pharmacology Training Program of the Japanese Society of Clinical Pharmacology and Therapeutics.
Dr. Stephen B. Hanauer
My colleague, Atsushi Sakuraba, is the senior author of the meta-analysis evaluating the efficacy and safety of medical therapies to prevent or treat osteoporosis in a wide spectrum of inflammatory bowel disease patients. Both ulcerative colitis and Crohn's disease carry numerous risk factors for the development of osteoporosis, and the younger ages of IBD patients create a longer duration of risk, such that the AGA considers monitoring of bone mineral density, and treatment with calcium and vitamin D in patients exposed to corticosteroids for greater than 3 months, to be important indicators of quality care. Active inflammation, malabsorption, vitamin D deficiency, and treatment with glucocorticoids all contribute to the risk of decreased bone density and fractures. Because of the heterogeneity of risk factors and the many small, individual clinical trials, a meta-analytic approach was required to substantiate benefits in the diverse patient subpopulations (active vs. quiescent ulcerative colitis or Crohn's disease) and in patients taking glucocorticoids. The most significant finding is that bisphosphonates, whether oral or parenteral, were the single therapeutic drug class that provided benefits across the disease states; calcium plus vitamin D alone (tested only in very small patient samples), fluoride, and calcitonin were not effective. It is reassuring that, even in the presence of IBD, these agents were well tolerated. Of course, while bisphosphonates may be the "mortar," adequate replacement of calcium and vitamin D provides the necessary "bricks" in the wall.
Dr. Stephen B. Hanauer is the Joseph B. Kirsner Professor of Medicine and Clinical Pharmacology, University of Chicago. He has no relevant conflicts of interest.
Major finding: Bisphosphonate treatment beat sodium fluoride, calcium plus vitamin D, and low-impact exercise in improving bone mineral density and reducing fractures.
Data source: A meta-analysis of 19 randomized controlled trials.
Disclosures: The authors disclosed no conflicts of interest related to this analysis. Dr. Sakuraba was supported by the Foreign Clinical Pharmacology Training Program of the Japanese Society of Clinical Pharmacology and Therapeutics.
New diverticulosis data challenge long-held beliefs
Not only is there no link between low-fiber diets and diverticulosis, but diverticulitis is also far less common among patients with diverticulosis than was previously believed.
Those are the conclusions of two new studies in the December issue of Clinical Gastroenterology and Hepatology, both of which challenge long-held beliefs about the causes of these conditions.
In the first study, Dr. Anne F. Peery of the University of North Carolina at Chapel Hill, and her colleagues looked at 539 patients with colonic diverticula and 1,569 controls, all culled from the Vitamin D and Calcium Polyp Prevention Study, a double-blind, placebo-controlled trial of vitamin D and/or calcium for the prevention of colonic adenomas (doi:10.1016/j.cgh.2013.06.033).
Patients with a self-reported history of diverticulosis or diverticulitis were excluded, as were cases with a history of colon resection, inflammatory bowel disease, or a family history of colon cancer. Most cases (88%) had descending or sigmoid colon diverticula, and these patients were significantly older and more likely to be male than were the controls.
According to Dr. Peery and colleagues, there was no difference between cases and controls in terms of mean dietary fiber intake (14.8 g per day versus 15.3 g per day, P = .2) and reported supplemental fiber intake (5% versus 5%, P = .7).
Nor was there any significant link when investigators compared the highest quartile of fiber intake (mean, 25 g/day) to the lowest (mean, 8 g/day) (odds ratio = 0.96; 95% confidence interval, 0.71-1.30).
Finally, the investigators found no associations between dietary fiber intake by subtype (for instance, beans, grains, fruits, and vegetables) and diverticulosis.
"Forty years ago, Dr. Neil Painter popularized the hypothesis that inadequate dietary fiber intake and constipation were the cause of sigmoid diverticulosis," wrote Dr. Peery. However, "Although the fiber hypothesis is conceptually attractive and widely accepted, it has not been rigorously examined."
And while Dr. Peery’s data were based on a food frequency questionnaire – which could be subject to measurement bias – she added that "the mean total fiber intake in the highest quartile was 25 g, versus 8 g in the lowest.
"This wide range makes it unlikely that homogeneity of intake accounts for the null association of fiber with the presence of diverticula," she wrote.
A second study, by Dr. Kamyar Shahedi of the University of California, Los Angeles/Veterans Affairs Center for Outcomes Research and Education, also examined the commonly held belief that up to 25% of patients with diverticulosis will develop diverticulitis.
Dr. Shahedi and colleagues performed a retrospective survival analysis of 2,222 patients from the Veterans Affairs Greater Los Angeles Healthcare System with colonic diverticulosis and a median follow-up of 6.75 years (doi:10.1016/j.cgh.2013.06.020). Patients were excluded if they had any ICD-9 code for diverticulitis or documentation of diverticulitis in the medical record notes at any point before the index date of diverticulosis.
When the researchers looked only at imaging-confirmed or surgical specimen–confirmed cases, just 23 patients (1%) developed acute diverticulitis during the study period, Dr. Shahedi found. This jumped to 95 patients (4.3%) when clinical diagnoses were also used, for an incidence of 6 cases per 1,000 patient-years.
Looking at predictors for progression, the authors found that only age was related to the development of diverticulitis, with every year of age at diverticulosis detection conferring a 2.4% lower hazard of developing diverticulitis.
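Two rough back-of-the-envelope checks on those numbers (ours, not the authors') are sketched below: the reported incidence can be approximated from the cohort size and median follow-up, and a 2.4% lower hazard per year of age compounds multiplicatively across larger age differences.

```python
# Back-of-the-envelope checks only; these are not calculations from the paper.

# Approximate incidence: 95 clinical or confirmed cases over roughly
# 2,222 patients followed a median of 6.75 years (treating median follow-up
# as a crude stand-in for mean follow-up).
person_years = 2222 * 6.75
print(f"~{95 / person_years * 1000:.1f} cases per 1,000 patient-years")  # ~6.3

# A 2.4% lower hazard per year of age implies a per-year hazard ratio of ~0.976,
# which compounds multiplicatively over an age difference.
hr_per_year = 1 - 0.024
for years in (10, 20, 30):
    print(f"{years}-year age difference: HR ~ {hr_per_year ** years:.2f}")
# 10 years: ~0.78; 20 years: ~0.62; 30 years: ~0.48
```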
According to the authors, the "widely cited figures" suggesting that up to a quarter of patients with diverticulosis will develop acute diverticulitis are based on data collected before the era of routine colon screening. "Therefore, the true denominator of individuals harboring diverticulosis was not accounted for in these calculations," they concluded.
And while their retrospective study does leave room for the possibility that cases were missed, "Future series or patient registries may better standardize the definition of diverticulitis in a prospective cohort," wrote the investigators.
In the meantime, prevalence data such as these "may help to reframe discussions with patients regarding their probability of developing clinically significant diverticulitis."
However, even as these two findings change the way providers counsel patients about the cause and impact of diverticula, a third study, also in December’s issue of Clinical Gastroenterology and Hepatology, adds another wrinkle: Patients who do develop diverticulitis are at increased risk for a diagnosis of irritable bowel syndrome later on.
Dr. Erica Cohen of the VA Greater Los Angeles Healthcare System, and colleagues looked at 1,105 chart-confirmed cases of diverticulitis, identified retrospectively from the same dataset used by Dr. Shahedi (doi:10.1016/j.cgh.2013.03.007).
All cases were matched with controls seen on the same day, the mean follow-up period was 6.3 years, and patients with pre-existing IBS or functional bowel diagnoses were excluded from the study.
The primary outcome was a new IBS diagnosis after the index diverticulitis attack (for cases) or enrollment date (for controls). Ultimately, Dr. Cohen found 24 cases of newly diagnosed IBS during the study period: 20 among diverticulitis cases and 4 among controls. That translated to a hazard ratio of 4.7 among cases compared with controls, even after adjustment for age, sex, ethnicity, race, inpatient versus outpatient status, and comorbidity score (95% CI, 1.6-14.0; P = .006).
Dr. Cohen offered several possible explanations for the association between diverticulitis and new diagnosis of IBS.
"Inflammation may alter gastrointestinal reflexes, amplify visceral sensitivity, render the bowel more susceptible to negative effects of microbiota, and alter motility in IBS," she said.
"Another putative mechanism of chronic diverticular disease involves shifts in intestinal microbiota leading to chronic inflammation, similar to theoretical models of IBS," Dr. Cohen said.
"Future research should identify demographic and clinical predictors of post-diverticulitis irritable bowel syndrome and evaluate its incidence in prospective studies to better determine whether the link is causal or merely associative," she concluded.
Finally, a fourth study could help researchers reduce the risk of the painful inflammatory condition: Among diverticulosis patients, higher levels of serum vitamin D were associated significantly with a lower risk of diverticulitis.
In her analysis, also published in the December issue of Clinical Gastroenterology and Hepatology, Dr. Lillias H. Maguire and colleagues identified 9,116 diverticulosis patients and 922 diverticulitis patients from the Partners Healthcare Research Patient Data Registry (doi:10.1016/j.cgh.2013.07.035). All patients had at least one prediagnostic serum vitamin D level on record between 1993 and 2012.
Dr. Maguire of Massachusetts General Hospital, Boston, found that patients with uncomplicated diverticulosis had mean levels of 29.1 ng/mL, versus 25.3 ng/mL among the diverticulitis patients (P less than .0001).
A sensitivity analysis that compared the mean prediagnostic values between cases and controls who had more than one reported vitamin D level yielded similarly significant results: The mean vitamin D level of uncomplicated diverticulosis was 33.0 ng/mL, compared with 28.1 ng/mL for acute diverticulitis patients (P less than .0001), 28.8 ng/mL for complicated diverticulitis patients (P = .002), 23.9 ng/mL for surgical diverticulitis cases (P less than .0001), and 25.5 ng/mL for recurrent diverticulitis patients (P less than .0001).
Indeed, "Compared with patients with acute diverticulitis without other sequelae, patients in the subgroups who developed abscess, required surgery, or had recurrent attacks were observed to have lower prediagnostic levels of vitamin D."
These differences between diverticulitis subgroups did not reach significance except in the cohort of patients who required surgery, who had the lowest levels of all.
"Taken together with prior studies showing an inverse association of 25(OH)D and risk of colonic cancer and inflammatory bowel disease, these results highlight the potential importance of vitamin D in the maintenance of colonic health," the investigators wrote.
"Additional studies in cohorts with more detailed information on potential confounders of this association are warranted," they added.
Dr. Peery, whose study looked at fiber intake among diverticulosis patients, and her collaborators reported having no disclosures, and stated that they received funding from the National Institutes of Health.
Dr. Shahedi, who assessed the incidence of diverticulitis, disclosed that three coinvestigators are employees of Shire Pharmaceuticals, which sponsored their study. Other investigators disclosed ties to Amgen and Ironwood Pharmaceuticals.
Dr. Cohen’s coinvestigators, who studied the prevalence of IBS following diverticulitis, disclosed ties to Ironwood Pharmaceuticals, Prometheus, Takeda Pharmaceuticals, Amgen, Ritter Pharmaceuticals, and Shire. Two investigators were employees of Shire, which funded the research.
Finally, the coinvestigators of Dr. Maguire, who looked at vitamin D levels, reported ties to Shire, Bayer Health, Pfizer, Millennium Pharmaceuticals, and Pozen. They were funded by grants from the American College of Gastroenterology as well as the National Institutes of Health.
Among the widely held beliefs of both lay and medical communities are that a) diverticulosis is associated with constipation and low consumption of dietary fiber, b) individuals with diverticulosis should eat neither seeds nor nuts, and c) diverticulitis will occur in up to 25% of individuals over their lifetimes. Certitude is not the same as correctness, and facts do not always support our most cherished beliefs. This is highlighted by four recent studies published in the December issue of Clinical Gastroenterology and Hepatology.
A cross-sectional study by Peery et al. showed the first of these beliefs to be untrue and also found that nonwhite subjects had a 26% lower risk than did whites, even after adjustment for risk factors. This suggests that earlier studies demonstrating a low prevalence of diverticulosis in African populations may have reflected, in part, racial differences rather than dietary issues.
A large study by Shahedi et al., of mostly male patients with incidental diverticulosis found by colonoscopy, suggests that the risk of developing diverticulitis has been vastly overestimated. This may reflect the higher proportion of asymptomatic diverticulosis discovered during screening colonoscopy, in contrast with earlier studies in which imaging was often performed on symptomatic patients.
Equally intriguing is the biologically plausible finding by Maguire et al. that higher serum levels of vitamin D may reduce the risk of diverticulitis. Screening for and correcting vitamin D deficiencies are widely accepted practices and easy to implement. It might also be mentioned that popcorn consumption was associated with a decreased incidence of diverticulitis in a study published 5 years ago (JAMA 2008;300:907-14), a finding still not fully appreciated by either the lay or medical community.
Lastly, Cohen and colleagues provide evidence for an increased risk of developing irritable bowel syndrome after acute diverticulitis. Although these data can be conceptualized as similar to postinfectious IBS, this does not imply causality. Previous studies have suggested that treatment with antibiotics increases functional abdominal symptoms, including IBS (Am. J. Gastroenterol. 2002;97:104-8). Nevertheless, diverticulitis may have functional GI consequences beyond the acute event.
Despite what we think we know, many questions of clinical importance about diverticulosis remain to be answered (Am. J. Gastroenterol. 2012;107:1486-93).
Dr. Arnold Wald is professor of medicine in the division of gastroenterology and hepatology, University of Wisconsin School of Medicine and Public Health. He had no relevant conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: The development of diverticulosis is not likely related to dietary fiber intake, and the progression of diverticulosis to diverticulitis may be less common than is now thought. On the other hand, patients who do develop diverticulitis may be at higher risk for irritable bowel syndrome, and ensuring adequate levels of serum vitamin D may help prevent diverticulitis altogether.
Data source: Four studies on diverticulosis and diverticulitis.
Disclosures: Dr. Peery and her collaborators stated that they received funding from the National Institutes of Health. Dr. Shahedi disclosed that three coinvestigators are employees of Shire Pharmaceuticals, which sponsored their study. Other investigators disclosed ties to Amgen and Ironwood Pharmaceuticals. Dr. Cohen’s coinvestigators disclosed ties to Ironwood Pharmaceuticals, Prometheus, Takeda Pharmaceuticals, Amgen, Ritter Pharmaceuticals, and Shire. Two investigators were employees of Shire, which funded their research. Dr. Maguire’s coinvestigators reported ties to Shire, Bayer Health, Pfizer, Millennium Pharmaceuticals, and Pozen. They stated that they were funded by grants from the American College of Gastroenterology and the National Institutes of Health.
Early azathioprine no benefit in Crohn’s
Two new studies of azathioprine use in early Crohn’s disease challenge the notions that the drug helps patients achieve remission and that it prolongs remission, respectively.
However, the drug may have efficacy in subsets of patients with severe disease and with perianal disease, the authors reported in the October issue of Gastroenterology.
In the first study, led by Dr. Julián Panés, of the Hospital Clínic/IDIBAPS in Barcelona, 131 patients were randomized to receive either azathioprine or placebo. In total, 37 patients in the azathioprine group and 32 in the placebo group completed the 18-month trial.
Patients ranged in age from 18 to 70 years and had been diagnosed with Crohn’s disease within 8 weeks of screening.
Dr. Panés and his colleagues found that overall, the proportion of patients with sustained corticosteroid-free remission was similar for both treatment groups: 44.1% of azathioprine patients and 36.5% of placebo patients (P = .48) (Gastroenterol. 2013 [doi: 10.1053/j.gastro.2013.06.009]).
Nor was there any marked difference in the proportions of patients who achieved sustained corticosteroid-free remission at earlier time points, or who achieved relapse-free survival.
One area in which the drug did show benefit, however, was found in a post hoc analysis, wherein the definition of relapse was refined from a Crohn’s Disease Activity Index (CDAI) score of 175 or greater to a score in excess of 220.
By this measure, azathioprine patients showed lower relapse rates than placebo patients (11.8% vs. 30.2%; P = .01), leading Dr. Panés to conclude that there may be a "potential benefit of the drug in patients with more severe disease."
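The per-arm counts behind these percentages are not reproduced here. As a rough illustration of why a difference of about 8 percentage points is far from significant in arms of this size while a difference of about 18 points is, the sketch below applies Fisher's exact test to hypothetical counts based on an assumed split of roughly 65 patients per randomized arm; the counts and the choice of test are assumptions, not the trial's analysis.

```python
# Illustrative sketch only: the counts below are back-of-envelope assumptions
# (about 65 patients per arm), not figures taken from the paper.
from scipy.stats import fisher_exact

def compare(events_a, n_a, events_b, n_b, label):
    # Two-sided Fisher's exact test on a 2x2 table of events vs. non-events.
    table = [[events_a, n_a - events_a], [events_b, n_b - events_b]]
    _, p = fisher_exact(table)
    print(f"{label}: {events_a}/{n_a} vs. {events_b}/{n_b}, P = {p:.2f}")

compare(29, 66, 24, 65, "sustained steroid-free remission (~44% vs. ~37%)")
compare(8, 66, 20, 65, "relapse with CDAI > 220 (~12% vs. ~31%)")
```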
The second study, by Dr. Jacques Cosnes, of the Hôpital Saint-Antoine and Université Pierre et Marie Curie, both in Paris, also looked at adults with Crohn’s disease diagnosed within the previous 6 months.
He and his colleagues randomly assigned 147 patients to initial azathioprine or to "conventional management," which included azathioprine only in cases of corticosteroid dependency, chronic active disease with frequent flares, poor response to corticosteroids, or development of severe perianal disease. A total of 132 patients completed the study.
The primary endpoint in this study was the proportion of trimesters spent in corticosteroid-free and anti–tumor necrosis factor–free remission over the 3 years of the study period.
Dr. Cosnes and his colleagues found that 67% of trimesters were spent in remission among the 65 patients in the early azathioprine group, compared with 56% among the 67 patients in the conventional management group (P = .69), with clinical remission in this case defined as a CDAI score of less than 150 (Gastroenterol. 2013 [doi: 10.1053/j.gastro.2013.04.048]).
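This endpoint pools person-time rather than counting patients: each participant contributes 12 trimesters over the 3-year study, and the reported figure is the share of all observed trimesters spent in corticosteroid-free, anti-TNF-free remission. A minimal sketch of that aggregation, using invented per-patient counts rather than trial data, is below.

```python
# Minimal sketch of the trimester-pooling endpoint; the per-patient counts
# are invented for illustration and are not trial data.
import numpy as np

TRIMESTERS_PER_PATIENT = 12  # 3 years of follow-up, assessed per trimester

def share_of_trimesters_in_remission(remission_trimesters_per_patient):
    counts = np.asarray(remission_trimesters_per_patient)
    return counts.sum() / (counts.size * TRIMESTERS_PER_PATIENT)

# Hypothetical group of four patients in remission for 10, 12, 5, and 8 trimesters:
print(f"{share_of_trimesters_in_remission([10, 12, 5, 8]):.0%}")  # 35 of 48 trimesters, ~73%
```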
Additionally, "Overall, patients in the early azathioprine group had a similar proportion of trimesters with flare, hospitalization, intestinal surgery, and use of anti-TNF therapy compared with patients in the conventional management group, but had less active perianal lesions," the researchers wrote.
Indeed, while only two patients in the azathioprine group required perianal surgery, nine underwent the procedure in the conventional management group, although there was no difference in patients remaining free of intestinal surgery between the two groups, they added.
"The efficacy of azathioprine for maintenance of remission in patients with Crohn’s disease, albeit modest ... is well established when used at the appropriate dosage and duration," Dr. Cosnes and his colleagues wrote.
"However, it has been claimed that this effect may be jeopardized if azathioprine is prescribed too late, at a time when irreversible damage has already occurred."
And while this study showed no benefit of early induction of azathioprine, except in cases of perianal disease, this "beneficial effect ... needs to be confirmed by another study before recommending early initiation of azathioprine therapy in patients with rectal involvement and/or perianal lesions at diagnosis."
Dr. Panés and his fellow investigators disclosed no personal conflicts of interest relating to this study; their work was sponsored by the Grupo Español de Trabajo en Enfermedad de Crohn y Colitis Ulcerosa. Dr. Cosnes and his fellow investigators reported financial ties to numerous pharmaceutical makers, including the makers of azathioprine. Their work was supported by the Association François Aupetit and the Société Nationale Française de Gastroentérologie.
FROM GASTROENTEROLOGY
Major finding: Early azathioprine therapy did not increase the proportion of patients with sustained corticosteroid-free remission in Crohn’s compared with placebo, and did not increase the proportion of time spent in remission compared with conventional management.
Data source: Two randomized studies of adults with early Crohn’s disease, diagnosed within 8 weeks and within 6 months of study entry, respectively.
Disclosures: Dr. Panés and his fellow investigators disclosed no personal conflicts of interest relating to this study; their work was sponsored by the Grupo Español de Trabajo en Enfermedad de Crohn y Colitis Ulcerosa. Dr. Cosnes and his fellow investigators reported financial ties to numerous pharmaceutical makers, including the makers of azathioprine. Their work was supported by the Association François Aupetit and the Société Nationale Française de Gastroentérologie.
Polypectomy bleed risk low in patients on antiplatelet agents
Maintaining antiplatelet therapy during colonoscopy with polypectomy carries a low risk of delayed bleeding.
Moreover, routine placement of endoscopic clips in these patients appears to increase safety while remaining cost effective, according to two studies published in the October issue of Clinical Gastroenterology and Hepatology.
The first study, by Dr. Linda A. Feagins of the North Texas VA Health Care System, Dallas, and her colleagues, looked at 219 patients taking thienopyridines, including clopidogrel and prasugrel, who underwent colonoscopy with polypectomy (Clin. Gastroenterol. Hepatol. 2013 [doi:10.1016/j.cgh.2013.02.003]).
The majority of patients had a history of stent placement; the remainder had coronary artery disease with or without bypass, a history of cerebrovascular accident, or aspirin allergy.
Data on immediate postpolypectomy bleeding requiring further endoscopic treatment, as well as delayed bleeds occurring within 30 days of the procedure, were collected, with "clinically important" delayed bleeding defined as bleeds requiring repeat colonoscopy, hospitalization, a drop in hemoglobin of at least 2 g/dL, or blood transfusion.
The investigators found that immediate bleeding occurred in 16 patients (7.3%) on uninterrupted antiplatelet therapy, compared with 14 (4.7%) of 297 controls (P = .25).
Looking at delayed bleeding among the 210 antiplatelet therapy users for whom 30-day follow-up was complete, 11% experienced "unimportant" delayed bleeding and 2.4% experienced clinically important events.
That compared with "unimportant" delayed events among 5.9% of controls and zero cases of clinically important delayed bleeds among these patients (P = .013).
Despite the significant difference, Dr. Feagins called the rate of bleeding in the antiplatelet therapy group "probably acceptable in comparison with the potentially catastrophic consequences of stent thrombosis."
"For patients who are at high risk for thromboembolic events with thienopyridine cessation and for whom colonoscopy cannot be delayed reasonably, our data support the decision to continue thienopyridines during colonoscopy," she noted.
The second study, by Dr. Neehar Parikh of Northwestern University in Chicago, examined whether routine clip placement after colon polypectomy would be cost effective both for patients taking antiplatelet agents and for those who were not (Clin. Gastroenterol. Hepatol. 2013 [doi:10.1016/j.cgh.2012.12.044]).
Using a software-generated decision analysis model and published bleeding rates from the literature, Dr. Parikh and his colleagues created a reference case of a 50-year-old patient who had a single 1.0- to 1.5-cm polyp removed during colonoscopy.
They found that prophylactic clip placement was not cost effective until the risk of postpolypectomy bleeding increased to 2.01%; it was not cost saving until the probability reached 2.07%.
Indeed, while "placing one prophylactic clip in patients on antiplatelet/anticoagulation therapy appears cost effective ... the use of two clips is equivocal and not favorable in those patients on anticoagulation and antiplatelet therapy, respectively, on the basis of mean bleeding rates reported," the investigators wrote.
The cost calculation also varies according to clip price, they added, although they clarified that, "on the basis of listed current clip costs, prophylactic clip placement is favorable for those patients on antiplatelet and anticoagulation therapy."
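Dr. Parikh's full decision-analysis model is not reproduced here, but the break-even logic behind a bleeding-risk threshold can be illustrated with a toy expected-cost comparison: clipping pays for itself once the cost of the bleeds it is assumed to prevent exceeds the cost of the clip. All of the dollar figures and the assumed risk reduction below are placeholders, not the study's inputs.

```python
# Toy break-even model; every input is an illustrative placeholder and none
# comes from the published decision analysis.
CLIP_COST = 100.0              # cost of prophylactic clip placement (placeholder)
BLEED_COST = 5000.0            # cost of managing a delayed postpolypectomy bleed (placeholder)
RESIDUAL_RISK_FRACTION = 0.2   # assumed fraction of bleeding risk remaining after clipping (placeholder)

def expected_cost(bleed_risk, clip):
    # Expected cost per patient under a simple two-branch decision tree.
    if clip:
        return CLIP_COST + bleed_risk * RESIDUAL_RISK_FRACTION * BLEED_COST
    return bleed_risk * BLEED_COST

# Clipping becomes the cheaper branch once clip_cost < risk * (1 - residual) * bleed_cost.
break_even_risk = CLIP_COST / ((1 - RESIDUAL_RISK_FRACTION) * BLEED_COST)
print(f"clipping is cost saving once baseline bleeding risk exceeds {break_even_risk:.2%}")
```

As the printed threshold shows, the break-even point moves directly with the clip price, which is why the authors note that the calculation varies with clip cost.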
Dr. Feagins and her fellow investigators reported no conflicts of interest. She disclosed funding from the Department of Veterans Affairs. Dr. Parikh and his colleagues also reported no financial conflicts. They disclosed funding from the National Institute of Diabetes and Digestive and Kidney Diseases, as well as the Agency for Healthcare Research and Quality.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: Patients taking antiplatelet agents can expect a 2.4% risk of clinically important bleeds within 30 days of polypectomy; prophylactic clipping may be cost effective for patients whose bleeding risk exceeds 2.01%.
Data source: One study involved 219 patients undergoing polypectomy in the setting of uninterrupted antiplatelet therapy. The second was a software-based decision analysis.
Disclosures: Dr. Feagins and her fellow investigators reported no conflicts of interest. She disclosed funding from the Department of Veterans Affairs. Dr. Parikh and his colleagues also reported no personal financial conflicts. They disclosed funding from the National Institute of Diabetes and Digestive and Kidney Diseases, as well as the Agency for Healthcare Research and Quality.
Novel HCV therapy leads to rapid response
Combination therapy with the second-generation protease inhibitor danoprevir yielded high rates of sustained virologic response in hepatitis C.
Moreover, a large proportion of patients also demonstrated an extended rapid virologic response through 20 weeks, reported Dr. Patrick Marcellin and his colleagues in the October issue of Gastroenterology.
Dr. Marcellin, of the Hôpital Beaujon in Clichy, France, and his coinvestigators looked at 225 treatment-naive adults with hepatitis C virus (HCV) genotype 1 infection, all of whom had serum HCV RNA levels of 50,000 IU/mL or more.
Exclusion criteria included advanced fibrosis or cirrhosis, anemia, poorly controlled diabetes, or body mass index less than 18 kg/m2 or greater than 36 kg/m2.
The goal of this phase II, randomized, placebo-controlled study (ATLAS) was to evaluate the efficacy of treatment with danoprevir plus peginterferon alfa-2a/ribavirin for 12 weeks, compared with peginterferon alfa-2a/ribavirin alone.
Patients were randomized to one of three doses of oral danoprevir or placebo: 300 mg every 8 hours, 600 mg every 12 hours, or 900 mg every 12 hours.
All doses and placebo were given with standard combination HCV therapy, including subcutaneous peginterferon alfa-2a 180 mcg/week plus oral ribavirin (1,000 mg/day for patients with a body weight less than 75 kg or 1,200 mg/day for patients weighing 75 kg or more).
At week 12, treatment with danoprevir or placebo was stopped, and peginterferon alfa-2a/ribavirin was continued for a total duration of 24 or 48 weeks, according to patient response.
Dr. Marcellin found that by week 1, mean decreases in HCV RNA ranged from 3.95 to 4.28 log10 IU/mL in the danoprevir groups, compared with 0.77 log10 IU/mL in the placebo group.
By week 2, according to the investigators, more than half of the danoprevir patients and none of the placebo recipients had achieved undetectable HCV RNA levels.
Indeed, broken down by dose, the researchers calculated that 74% of the danoprevir 300-mg group achieved a rapid virologic response (undetectable serum HCV RNA at week 4), with 65% maintaining an extended rapid virologic response (eRVR), defined as an undetectable HCV RNA that lasted from weeks 4 through 20.
Among the patients taking 600-mg doses, 88% achieved an RVR, with 79% maintaining an eRVR at week 20.
Finally, 86% of patients in the 900-mg treatment group achieved an RVR, although only 18% reached an eRVR.
Patients with an eRVR stopped all treatment at 24 weeks.
"Relapse occurred in 18%, 8%, and 11% of patients treated with danoprevir 300 mg, 600 mg, and 900 mg, respectively, versus 38% in the placebo group," the authors wrote.
Looking at the side-effect profile, Dr. Marcellin reported that fatigue, headache, nausea, insomnia, myalgia, and chills were the most common adverse events for both the treatment and placebo groups.
The investigators also observed reversible, grade 4 elevations in alanine aminotransferase (ALT) levels between weeks 6 and 12 in 2% of danoprevir-treated patients, including three in the 900-mg cohort and one in the 600-mg cohort.
Treatment was discontinued, and serum ALT levels returned to within 1.5 times the upper limit of normal within a month for all four patients, the authors added.
"Notwithstanding the low incidence of reversible ALT elevations observed with high-dose danoprevir in this trial, danoprevir also appears to have a better tolerability profile than either boceprevir or telaprevir, as evidenced by the lower incidence of rash and anemia among danoprevir-treated patients compared with placebo recipients," concluded the investigators.
Indeed, they pointed to other studies showing that coadministration of low-dose ritonavir, another protease inhibitor, "significantly inhibits danoprevir reactive metabolite formation, proposed to be associated with ALT elevations."
"Studies to further evaluate the efficacy and safety of danoprevir in different patient groups are ongoing," the researchers said.
Dr. Marcellin and his fellow investigators reported financial relationships with numerous pharmaceutical companies, including Roche, the maker of danoprevir, which also funded this study.
Second-generation protease inhibitors (PIs) currently in development are generally thought to have fewer drug-drug interactions, improved dosing schedules, and less frequent and less severe side effects. Many of the second-generation PIs are macrocyclic molecules, which tend to be more potent and, depending on the location of the macrocycle, able to retain activity against resistant variants. Commonly observed drug-resistant variants of the NS3 protein include Q80K, R155K, V36M/R155K, A156T, and D168A (ACS Chem. Biol. 2013;8:1469-78). Second-generation PIs have also shown increased efficacy against genotype 1, though they still have limited efficacy against other genotypes (Curr. Gastroenterol. Rep. 2013;15:303 [doi: 10.1007/s11894-012-0303-3]).
This report on danoprevir clearly shows it is a potent, pan-genotypic, macrocyclic second-generation PI that meets all of these criteria. However, the grade 4 elevations in alanine aminotransferase (ALT) levels seen in 2% of danoprevir-treated patients in this study (three in the 900-mg cohort and one in the 600-mg cohort) became a major roadblock to the phase III development of the compound. Instead, coadministration of low-dose ritonavir, another protease inhibitor that significantly inhibits danoprevir reactive metabolite formation, has allowed the compound to move forward into advanced studies without the hepatotoxicity concern [J. Hepatol. 2012;56(Suppl 2):S467; Hepatology 2012;56(Suppl 1):552A; Hepatology 2012;56(Suppl 1):231A; J. Hepatol. 2012;56(Suppl 2):S555].
The role for a ritonavir-boosted PI in all-oral, interferon-free regimens remains to be defined over the coming years. Danoprevir is not the only PI that requires boosting (ABT450 does as well); however, the field is quickly becoming crowded with PIs that do not require ritonavir (asunaprevir, faldaprevir, simeprevir, and vaniprevir). The role for danoprevir in the United States is therefore still unclear.
Dr. Paul J. Pockros is director of the Liver Disease Center of the division of gastroenterology/hepatology at the Scripps Clinic, director of the Scripps Clinic Liver Research Consortium, and director of clinical research at the Scripps Translational Science Institute, La Jolla, Calif. He is a researcher, speaker, and advisory board member for Roche/Genentech.
FROM GASTROENTEROLOGY
Major finding: Among hepatitis C patients who received the protease inhibitor danoprevir in addition to a regimen of peginterferon alfa-2a/ribavirin, as many as 88% achieved a rapid virologic response.
Data source: The phase II, randomized, partially blinded ATLAS study.
Disclosures: Dr. Marcellin and his fellow investigators reported financial relationships with numerous pharmaceutical companies, including Roche, the maker of danoprevir, which also funded this study.