Combination of Rivaroxaban and Aspirin Improves Cardiovascular Outcomes
Compared with aspirin alone, a regimen of rivaroxaban plus aspirin is associated with better cardiovascular outcomes among patients with stable atherosclerotic vascular disease, according to research published August 27 in the New England Journal of Medicine. Although the combination increases the risk of major bleeding events, it has greater net clinical benefit than aspirin alone, said the investigators.
COMPASS: An International Trial
“Efforts to improve aspirin have focused primarily on combining aspirin with another antiplatelet drug or replacing aspirin with another antiplatelet drug, but this [tactic] has had only limited success,” said lead investigator John W. Eikelboom. He and his colleagues conducted the Cardiovascular Outcomes for People Using Anticoagulation Strategies (COMPASS) trial, a double-blind study to evaluate whether rivaroxaban, a selective direct factor Xa inhibitor, alone or in combination with aspirin, would be more effective than aspirin alone for secondary cardiovascular prevention.
The study took place at 602 centers in 33 countries. Eligible patients met the criteria for coronary artery disease, peripheral arterial disease, or both. Exclusion criteria included high bleeding risk, recent stroke or previous hemorrhagic or lacunar stroke, severe heart failure, and advanced stable kidney disease. During a run-in phase, participants received a rivaroxaban-matched placebo twice daily and aspirin (100 mg/day). Participants who adhered to this regimen were randomized in equal groups to rivaroxaban (2.5 mg twice daily) plus aspirin (100 mg/day), rivaroxaban (5 mg twice daily) plus a once-daily placebo, or aspirin (100 mg/day) plus a twice-daily placebo.
The primary efficacy outcome was a composite of cardiovascular death, stroke, or myocardial infarction. The main safety outcome was a modification of the International Society on Thrombosis and Haemostasis criteria for major bleeding. The investigators intended to continue the trial until at least 2,200 participants had a confirmed primary efficacy outcome, and they planned formal interim efficacy analyses after 50% and 75% of the anticipated primary efficacy events had occurred.
Study Was Stopped Early for Efficacy
The investigators enrolled 27,395 participants into the trial. The population’s mean age was 68.2 years, and 22.0% of participants were women. The mean systolic blood pressure was 136 mm Hg, the mean diastolic blood pressure was 78 mm Hg, and the mean total cholesterol level was 4.2 mmol/L. Having observed a consistent difference in the primary efficacy outcome in favor of rivaroxaban plus aspirin, the independent data and safety monitoring board recommended early termination of the study at the first formal interim analysis for efficacy.
The rate of primary outcome events was 4.1% (379 patients) in the rivaroxaban-plus-aspirin group, 4.9% (448 patients) in the rivaroxaban group, and 5.4% (496 patients) in the aspirin group. Compared with aspirin alone, rivaroxaban plus aspirin reduced the risk of the primary outcome by 24%. Rivaroxaban alone reduced the risk of the primary outcome by 10%, compared with aspirin alone, but this result was not statistically significant.
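For readers who want to see where figures like the 24% reduction come from, the short Python sketch below recomputes relative risk reductions from the crude event rates reported above. Note that the trial’s published estimates are hazard ratios from time-to-event analyses, so this simple rate arithmetic is only an approximation, not the investigators’ method.

```python
# Approximate relative risk reductions from the crude event rates reported in
# the article. The trial's own estimates are hazard ratios, so these values
# only approximate the published 24% and ~10% figures.

def relative_risk_reduction(rate_treatment: float, rate_control: float) -> float:
    """Relative risk reduction, in percent, from two event rates (in percent)."""
    return (1 - rate_treatment / rate_control) * 100

riva_plus_asa = 4.1  # % with a primary outcome event, rivaroxaban + aspirin
riva_alone = 4.9     # % with a primary outcome event, rivaroxaban alone
asa_alone = 5.4      # % with a primary outcome event, aspirin alone

print(f"Rivaroxaban + aspirin vs aspirin alone: "
      f"{relative_risk_reduction(riva_plus_asa, asa_alone):.0f}% reduction")  # ~24%
print(f"Rivaroxaban alone vs aspirin alone: "
      f"{relative_risk_reduction(riva_alone, asa_alone):.0f}% reduction")      # ~9%
```

The crude arithmetic lands close to the reported figures (about 24% and 9-10%), which is expected when follow-up is similar across groups.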
The rate of major bleeding events was 3.1% (288 patients) in the rivaroxaban-plus-aspirin group and 1.9% (170 patients) in the aspirin-alone group. Compared with aspirin alone, rivaroxaban plus aspirin increased the risk of major bleeding by 70%. Most of the excess major bleeding occurred in the gastrointestinal tract. The researchers saw no significant between-group difference in the rates of fatal bleeding, intracranial bleeding, or symptomatic bleeding into a critical organ. The rate of serious adverse events was 7.9% (721 patients) in the rivaroxaban-plus-aspirin group, 7.7% (702 patients) in the rivaroxaban group, and 7.3% (662 patients) in the aspirin group.
The risk of the net-clinical-benefit outcome (ie, a composite of cardiovascular death, stroke, myocardial infarction, fatal bleeding, or symptomatic bleeding into a critical organ) was 20% lower with rivaroxaban plus aspirin than with aspirin alone. The risk of the net-clinical-benefit outcome was not significantly lower with rivaroxaban alone than with aspirin alone.
Could Practice Guidelines Change?
Although the rate of stroke was lower among patients receiving rivaroxaban plus aspirin than among patients receiving aspirin alone, the researchers found no statistically significant difference between groups in the rate of myocardial infarction, said Eugene Braunwald, MD, Professor of Cardiovascular Medicine at Brigham and Women’s Hospital in Boston, in an accompanying editorial. Nevertheless, “this trial represents an important step forward in thrombocardiology, and it is likely to change practice guidelines,” he added.
Future clinical investigation in this field could pursue several paths. For example, a head-to-head comparison between aspirin plus a second antiplatelet drug and a low dose of a factor Xa inhibitor could be of great interest, said Dr. Braunwald. “Perhaps substituting a P2Y12 inhibitor or thrombin-receptor antagonist for aspirin, together with a very low dose of a factor Xa inhibitor, might lead to even greater efficacy by reducing myocardial infarction,” he added. In addition, different subgroups of patients with stable ischemic heart disease may respond differently to these drug combinations, and those differences could enable a personalized approach to treatment, he concluded.
—Erik Greb
Suggested Reading
Braunwald E. An important step for thrombocardiology. N Engl J Med. 2017 Aug 27 [Epub ahead of print].
Eikelboom JW, Connolly SJ, Bosch J, et al. Rivaroxaban with or without aspirin in stable cardiovascular disease. N Engl J Med. 2017 Aug 27 [Epub ahead of print].
‘Observationists’: Ready for prime time in an internal medicine residency program
The Institute of Medicine, in its report “Hospital-Based Emergency Care: At the Breaking Point,” identified observation units (OUs) as a “particularly promising” approach to improving patient flow.1 Many hospitals across the country either already have such units or are in the process of establishing them.
Multiple studies have shown that a highly efficient OU can save billions in health care costs.2 Historically, such units have existed within, and been staffed by, emergency departments. Since implementation of the two-midnight rule in October 2013, observation care has changed dramatically: rather than run-of-the-mill 30- to 40-year-old chest pain patients, 80- to 90-year-olds with multiple comorbidities are now being placed in observation.3 In many cases, this has shifted care out of the emergency department and into the arena of hospital medicine.
At our institution, OUs are staffed by internal medicine residents supervised by faculty 24/7, year round. This, we believe, is a unique model. We implemented our model after a mini SWOT (strengths, weaknesses, opportunities, and threats) analysis in August 2014. The biggest strength was that we were educating the next generation of “Observationists” as we improved the quality of care delivered to our patients. Our biggest opportunity was the absence of an existing curriculum for teaching internal medicine residents the art of observation medicine, so we designed our own. As Peter Drucker said, “The best way to predict the future is to create it.”
The curriculum is highly innovative and exposes our residents to both the business and administrative aspects of OUs. In an anonymous survey conducted within 6 months of instituting the rotation, over 90% of our residents rated it as valuable to their training. Since we went live, some of our residents who have graduated are now leading OUs at other hospitals.
To measure our program outcomes, we developed a dashboard with multiple metrics for our team. With such data, the rotation became an incubator for resident quality improvement projects. Residents have developed, implemented, and published multiple abstracts, presented posters, and even won first place for innovation at the Midwest Regional Society of General Internal Medicine conference.5-8
We have learned many lessons, and every challenge has been addressed as an opportunity. The first lesson was that we needed strong physician leadership to act as the gatekeeper to the unit. Second, as the rotation matured, we kept our focus on high-quality patient care; we created a quality dashboard that includes length of stay, falls, and patient satisfaction, among other metrics. Last but not least, we stayed mindful of stakeholder buy-in, which for us was primarily our residents. We created a curriculum that provides the next generation of internists a broad experience of medicine, with the appropriate balance of autonomy and supervision. This, we believe, is a win-win proposition for all stakeholders – hospitals, physicians, residents, and, most importantly, the patients we serve. Additionally, data at our institution show that our resident-run units are educationally, clinically, and financially beneficial to residency programs and hospitals.
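As an illustration only (this is not the authors’ actual dashboard), the sketch below shows one way encounter-level quality metrics like those named above could be structured and rolled up; the field names, satisfaction scale, and sample values are assumptions.

```python
# Hypothetical sketch of an observation-unit quality dashboard built around the
# metrics named in the article (length of stay, falls, patient satisfaction).
# Field names, the satisfaction scale, and the sample data are illustrative
# assumptions, not the authors' implementation.
from dataclasses import dataclass
from statistics import mean

@dataclass
class OUEncounter:
    length_of_stay_hours: float
    fall_occurred: bool
    satisfaction_score: int  # assumed 1-5 survey scale

def dashboard_summary(encounters: list[OUEncounter]) -> dict:
    """Roll encounter-level records up into unit-level dashboard metrics."""
    return {
        "mean_los_hours": mean(e.length_of_stay_hours for e in encounters),
        "fall_rate": sum(e.fall_occurred for e in encounters) / len(encounters),
        "mean_satisfaction": mean(e.satisfaction_score for e in encounters),
    }

# Example with made-up encounters
print(dashboard_summary([OUEncounter(18.5, False, 5), OUEncounter(26.0, False, 4)]))
```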
Teaching and exposure to observation medicine is not currently a mainstay in many internal medicine residency programs. Our program provides a framework to establish an observation medicine rotation, which exposes residents to quality metrics and expands their scope of medical education.
Dr. Nand is medical director, care management & observation unit, and associate program director, internal medicine residency program, at the University of Illinois College of Medicine/Advocate Christ Medical Center.
References
1. Institute of Medicine. “Hospital-Based Emergency Care: At the Breaking Point.” Washington, DC: National Academies Press; 2006.
2. Baugh CJ, et al. “Making greater use of dedicated hospital observation units for many short-stay patients could save $3.1 billion a year.” Health Aff (Millwood). 2012 Oct;31(10):2314-23.
3. Fact Sheet: Two-Midnight Rule. 2015. Available at www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2015-Fact-sheets-items/2015-07-01-2.html. Accessed March 29, 2016.
4. Society of Hospital Medicine. The observation unit white paper. http://www.hospitalmedicine.org, April 3, 2013.
5. Yousuf T. et al. “Intermediate chest pain protocol in an observation unit” Won first place award for innovation at the Midwest Regional SGIM conference, August 2015.
6. Sarfraz S et al. “Hand hygiene intervention increases compliance in observation unit” Poster: May 2016, Macy Midwest GME Conference, Michigan.
7. “Impact of syncope protocol in an observation unit of an academic tertiary care center” Poster for Oct 2016 AAIM skills development conference, National Harbor, Md.
8. Metgud S et al. “Integrating residents in providing high-value care via improved results of the ACGME annual resident survey” Poster: May 2016, Macy Midwest GME Conference, Michigan.
Older RBCs may sometimes be better
New research suggests that, overall, the age of transfused red blood cells (RBCs) does not significantly impact outcomes in critically ill adults, but, in some cases, older RBCs may be the better choice.
The study showed no significant difference in 90-day mortality whether patients received RBCs stored for a mean of 11.8 days or 22.4 days.
Likewise, there were no significant differences in most other study endpoints.
However, febrile nonhemolytic transfusion reactions were more frequent in the short-term storage group.
And among the most severely ill patients, the transfusion of older RBCs was associated with fewer deaths at 90 days.
“Older blood appears to be like a good red wine—better with some age,” said study author D. James Cooper, MD, of Monash University and Alfred Hospital in Melbourne, Victoria, Australia.
“The findings of our trial confirm that the current duration of storage of red blood cells for transfusion is both safe and optimal.”
Dr Cooper and his colleagues reported their findings in NEJM.
The researchers conducted this trial from November 2012 through December 2016 at 59 centers in 5 countries—Australia, New Zealand, Ireland, Finland, and Saudi Arabia.
The study included nearly 5000 critically ill adults who were randomized to receive either the freshest available RBCs or the oldest available RBCs.
There were 2457 patients in the short-term storage group, where the mean RBC storage duration was 11.8 ± 5.3 days. And there were 2462 patients in the long-term storage group, where the mean RBC storage duration was 22.4 ± 7.5 days.
Baseline characteristics were largely similar between the 2 groups. However, patients were significantly older in the short-term storage group, with a mean age of 62.5 ± 16.8 years, compared to 61.4 ± 17.3 years in the long-term storage group (P=0.02).
The median time from randomization to first RBC transfusion was similar between the groups—1.6 hours in the short-term group and 1.5 hours in the long-term group. The mean number of RBC units was also similar—4.1 ± 6.0 and 4.0 ± 6.2, respectively. The use of other blood products was similar as well.
90-day mortality
The study’s primary endpoint was 90-day mortality, which was 24.8% (n=610) in the short-term storage group and 24.1% (n=594) in the long-term storage group. The unadjusted (u) odds ratio (OR) was 1.04 (P=0.57).
When the researchers adjusted for APACHE III risk of death, patient age, hemoglobin at randomization, blood group, and site, the adjusted (a) OR was 1.04 (P=0.59).
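For context, an unadjusted odds ratio like the 1.04 above can be reproduced directly from the reported counts. The minimal Python sketch below does so, assuming the denominators are the randomized group totals given earlier (2,457 and 2,462), which may differ slightly from the trial’s analysis population.

```python
# Reproduce the unadjusted odds ratio for 90-day mortality from the reported
# counts: 610/2457 deaths with fresher RBCs vs 594/2462 with older RBCs.
# Denominators are assumed to be the randomized totals given in the article.

def odds_ratio(events_a: int, total_a: int, events_b: int, total_b: int) -> float:
    """Unadjusted odds ratio for group A relative to group B."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

print(f"uOR, 90-day mortality: {odds_ratio(610, 2457, 594, 2462):.2f}")  # ~1.04
```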
When the researchers looked at patient subgroups, they found a significant difference between the storage groups when it came to 90-day mortality according to APACHE III risk of death.
Patients whose APACHE III predicted risk of death at hospital discharge was at or above the median of 21.5% had a significantly higher rate of 90-day mortality if they received the freshest available RBCs rather than the oldest available RBCs—37.7% vs 34.0% (uOR=1.18, P=0.05).
There were no significant differences in 90-day mortality in the other subgroups.
Secondary endpoints
There were no significant between-group differences in secondary endpoints, with the exception of febrile nonhemolytic transfusion reaction. The incidence of this outcome was 5.0% in the short-term storage group and 3.6% in the long-term group (uOR=1.42, P=0.01; aOR=1.45, P=0.01).
Other secondary endpoints included (data in the short-term and long-term groups, respectively):
- Death at day 28 (19.4% and 18.8%, uOR=1.04, P=0.61)
- Death at day 180 (28.5% and 28.1%, uOR=1.02, P=0.75)
- Persistent organ dysfunction or death at day 28 (23.3% and 22.3%, uOR=1.06, P=0.39)
- New bloodstream infection (1.4% and 1.6%, uOR=0.90, P=0.65)
- Duration of hospital stay (median 14.5 days and 14.7 days, P=0.42)
- Duration of stay in the intensive care unit (median 4.2 days for both, P=0.86)
- Invasive mechanical ventilation (58.6% and 59.3%, uOR=0.97, P=0.64)
- Renal-replacement therapy (13.9% and 14.6%, uOR=0.97, P=0.48).
Daratumumab combos approved to treat MM in Japan
Japan’s Ministry of Health, Labour and Welfare has approved the use of daratumumab (DARZALEX®) in combination with lenalidomide and dexamethasone or with bortezomib and dexamethasone to treat adults with relapsed or refractory multiple myeloma (MM).
Daratumumab is a human IgG1κ monoclonal antibody that binds to CD38, a protein highly expressed on the surface of MM cells.
The drug is being developed by Janssen Biotech, Inc. under an exclusive worldwide license from Genmab.
The approval of daratumumab is based on data from the phase 3 POLLUX and CASTOR trials.
In the POLLUX trial, researchers compared treatment with lenalidomide and dexamethasone to treatment with daratumumab, lenalidomide, and dexamethasone in patients with relapsed or refractory MM.
Patients who received daratumumab in combination had a significantly higher response rate and longer progression-free survival than patients who received the 2-drug combination.
However, treatment with daratumumab was associated with infusion-related reactions and a higher incidence of neutropenia.
Results from this trial were published in NEJM in October 2016.
In the CASTOR trial, researchers compared treatment with bortezomib and dexamethasone to treatment with daratumumab, bortezomib, and dexamethasone in patients with previously treated MM.
Patients who received the 3-drug combination had a higher response rate, longer progression-free survival, and a higher incidence of grade 3/4 adverse events than those who received the 2-drug combination.
Results from this trial were published in NEJM in August 2016.
FDA grants factor IX therapy orphan designation
The US Food and Drug Administration (FDA) has granted orphan drug designation to CB 2679d/ISU304, a clinical-stage drug candidate for hemophilia B.
CB 2679d/ISU304 is a next-generation coagulation factor IX variant that may allow for subcutaneous prophylactic treatment of patients with hemophilia B.
The product is being developed by Catalyst Biosciences, Inc. and ISU Abxis.
The companies are currently conducting a phase 1/2 trial of CB 2679d/ISU304 in patients with severe hemophilia B.
Catalyst Biosciences and ISU Abxis plan to have interim, top-line results from this trial by the end of 2017 and complete results in early 2018.
CB 2679d/ISU304 also has orphan medicinal product designation from the European Commission.
About orphan designation
The FDA grants orphan designation to products intended to treat, diagnose, or prevent diseases/disorders that affect fewer than 200,000 people in the US.
The designation provides incentives for sponsors to develop products for rare diseases. These incentives may include tax credits toward the cost of clinical trials, waiver of prescription drug user fees, and 7 years of market exclusivity if the product is approved.
Bowel rest or early feeding for acute pancreatitis
Clinical question: When should you start enteral feedings in patients with acute pancreatitis?
Background: Oral intake stimulates pancreatic exocrine activity, so bowel rest has been one of the mainstays of acute pancreatitis treatment. However, some studies suggest that enteral nutrition may reduce the risk of infection by supporting the gut’s protective barrier, thereby limiting bacterial translocation and sepsis. Studies comparing early versus delayed enteral nutrition in acute pancreatitis have thus far been conflicting.
Setting: Europe, New Zealand, United States, and China.
Synopsis: Study authors attempted to compare the length of hospital stay, mortality, and readmission in hospitalized patients with acute pancreatitis who received early versus delayed feeding. The authors searched for randomized clinical trials that compared early feeding (less than 48 hours after hospitalization) versus delayed feeding (more than 48 hours after hospitalization).
The authors found and analyzed 11 randomized trials comprising 948 patients in which early and delayed feeding strategies were compared. Their review suggests that early feeding in patients with acute pancreatitis is not associated with increased adverse events and may reduce length of hospital stay. The analysis was limited by markedly different feeding protocols, which precluded a formal meta-analysis; by the inclusion of studies at high or unclear risk of bias; and by the small size of most trials, which limited power to detect differences in outcomes.
Bottom line: Optimal route and timing of nutrition in patients with acute pancreatitis remains unsettled.
Citation: Vaughn VM, Shuster D, Rogers MAM, et al. Early versus delayed feeding in patients with acute pancreatitis: a systematic review. Ann Intern Med. 2017;166(12):883-92.
Dr. Teixeira is a hospitalist at Ochsner Health System, New Orleans.
Nonpruritic rash on arms
Based on the results of the punch biopsy and the slight scale seen on the periphery of the lesions (a collarette scale pattern), the FP made a diagnosis of pityriasis rosea.
The distribution of the lesions in this case was not typical for pityriasis rosea; lesions are typically found on the trunk (not the arms) and may start with a herald patch. Given the distribution of the lesions in this case, the more precise diagnosis was inverse pityriasis rosea.
The FP explained to the patient and her mother that the rash would resolve spontaneously and was unlikely to leave any scarring. Six months later, the FP saw the mother for an unrelated issue; she reported that her daughter’s rash had improved within a month of the visit and there had been no scarring.
Photos and text for Photo Rounds Friday courtesy of Richard P. Usatine, MD. This case was adapted from: Henderson D, Usatine R. Pityriasis rosea. In: Usatine R, Smith M, Mayeaux EJ, et al, eds. Color Atlas of Family Medicine. 2nd ed. New York, NY: McGraw-Hill; 2013: 896-900.
To learn more about the Color Atlas of Family Medicine, see: www.amazon.com/Color-Family-Medicine-Richard-Usatine/dp/0071769641/
You can now get the second edition of the Color Atlas of Family Medicine as an app by clicking on this link: usatinemedia.com
Delayed-release metformin proves promising for diabetic renal disease
LISBON – A delayed-release formulation of metformin (metformin DR) significantly reduced hemoglobin A1c compared with placebo in a 16-week, dose-ranging, phase 2 trial performed in patients with type 2 diabetes mellitus and chronic kidney disease (CKD).
There was also a reduced incidence of gastrointestinal side effects with metformin DR versus immediate-release (IR) metformin (less than 16% at all doses tested vs. 28%), particularly with regard to nausea (1%-3% vs. 10%).
“Based on concerns about the potential for lactic acidosis, primarily in patients with fairly severe renal impairment, metformin use is contraindicated in patients with stage IV chronic kidney disease,” Dr. Frias observed. Furthermore, its use is restricted in patients with stage IIIB CKD: European guidelines recommend lower starting (500 mg) and maximum (1,000 mg) daily doses, while U.S. guidelines recommend continuing treatment with caution in patients already on metformin and not initiating it in new patients with this stage of kidney disease.
Treating patients with type 2 diabetes mellitus (T2DM) and later stages of CKD is challenging, as there are issues with almost all of the available alternatives to metformin, he noted. For instance, insulin and sulfonylureas carry a risk of hypoglycemia that is higher in patients with stage IIIB/IV renal disease than in those without. The dipeptidyl peptidase-4 inhibitors are “modestly able to reduce A1c, but generally do much better in combination with metformin, which is often contraindicated in these patients,” Dr. Frias said. Sodium glucose cotransporter 2 inhibitors are “generally not effective” in this patient group, he said.
Metformin DR is being developed specifically for patients with T2DM and stage IIIB/IV CKD, Dr. Frias said. Its enteric coating allows it to bypass the stomach and upper intestine so that most metformin absorption occurs in the lower bowel, reducing systemic exposure while retaining the drug’s positive effects on glycemic mechanisms such as the secretion of glucagon-like peptide 1.
In the current phase 2 study, 571 patients with T2DM and stage I/II CKD were recruited. Patients with stage IIIB/IV were not included because of the restrictions on the use of metformin.
Patients were randomized to receive placebo or metformin DR (600 mg, 900 mg, 1,200 mg, or 1,500 mg twice daily) in a double-blind comparison; a single-blind reference arm of metformin IR (1,000 mg once daily for the first week, then 1,000 mg twice daily) was also included in the study design.
The change in hemoglobin A1c (HbA1c) from baseline to week 16 of treatment, the primary endpoint, was significantly (P less than .05) greater with metformin DR 1,200 mg and 1,500 mg than with placebo (–0.49%, –0.62%, and –0.06%, respectively). Changes in fasting plasma glucose (FPG) from week 4 to week 16 were also greater with metformin DR than with placebo, with the 1,200-mg metformin DR dose achieving a 25.1 mg/dL drop in FPG, “almost 80% of the fasting glucose–lowering capacity of the immediate-release formulation.”
While the changes in HbA1c (–1.10%) and FPG (–32.6 mg/dL) were greatest with metformin IR, the lower systemic exposure needs to be considered, Dr. Frias said. The plasma exposure with metformin DR was less than 37% that of metformin IR.
“If we normalize for systemic exposure, so for any given unit, if you will, of systemic exposure, you actually had [a 1.5-fold] improved hemoglobin A1c with the delayed-release formulation, and a twofold increase in the fasting glucose,” Dr. Frias reported. “So from a practical point of view, if you needed to reach those ‘safe’ plasma concentrations with an immediate-release formulation, you would have to lower [the dose of] that formulation, probably to a dose that would not be efficacious for a patient.”
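For readers who want to trace that normalization, the short sketch below is a hypothetical back-of-envelope reconstruction, not an analysis reported by the investigators. It uses only the figures quoted in this article and treats the stated “less than 37%” relative exposure as an approximate factor of 0.37; on those assumptions the arithmetic lands close to the 80%, 1.5-fold, and twofold figures cited above.

```python
# Illustrative reconstruction of the exposure-normalized comparison described
# by Dr. Frias, using figures reported in this article. The 0.37 factor is the
# quoted upper bound on relative exposure, so results are approximate.

IR_HBA1C_DROP = 1.10      # % reduction in HbA1c with metformin IR
DR_HBA1C_DROP = 0.62      # % reduction with metformin DR 1,500 mg
IR_FPG_DROP = 32.6        # mg/dL reduction in FPG with metformin IR
DR_FPG_DROP = 25.1        # mg/dL reduction with metformin DR 1,200 mg
RELATIVE_EXPOSURE = 0.37  # assumed plasma exposure of DR relative to IR

# Raw comparison: DR delivers roughly 80% of the IR fasting-glucose effect.
print(f"FPG effect of DR vs. IR: {DR_FPG_DROP / IR_FPG_DROP:.0%}")  # ~77%

# Dividing each DR effect by its relative systemic exposure reproduces the
# roughly 1.5-fold (HbA1c) and twofold (FPG) per-exposure advantages quoted.
hba1c_per_exposure = DR_HBA1C_DROP / RELATIVE_EXPOSURE
fpg_per_exposure = DR_FPG_DROP / RELATIVE_EXPOSURE
print(f"Exposure-normalized HbA1c advantage: {hba1c_per_exposure / IR_HBA1C_DROP:.1f}-fold")  # ~1.5
print(f"Exposure-normalized FPG advantage: {fpg_per_exposure / IR_FPG_DROP:.1f}-fold")        # ~2.1
```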
As for safety, any adverse event (AE) occurred in 41.7% of placebo-treated patients, in 47.9% of metformin IR–treated patients, and in 55.3%, 48.4%, 39.6%, and 43.8% of those taking metformin DR at the respective doses of 600 mg, 900 mg, 1,200 mg, and 1,500 mg.
Serious AEs were recorded in 4.2% of placebo-treated patients, in 1.1% of metformin IR–treated patients, and in 1.1%, 0%, 4.2%, and 1.0% of those taking increasing doses of metformin DR.
Treatment-related AEs (12.8%, 13.7%, 14.6%, and 9.4% across the increasing metformin DR doses) and AEs leading to discontinuation (3.2%, 2.1%, 7.3%, and 2.1%) were less frequent with metformin DR than with metformin IR (25.5% and 8.5%, respectively). Of placebo-treated patients, 6.3% developed a treatment-related AE, and 6.3% discontinued the study as a result.
“The improved risk/benefit profile that’s seen [in this study] would lead you to think that this would be a formulation that would be effective, particularly in patients with CKD IIIB or IV,” Dr. Frias concluded, noting that additional studies are needed to explore this possibility.
The study was funded by Elcelyx Therapeutics. Dr. Frias disclosed receiving research support from Abbvie, Eli Lilly, IONIS, Janssen, Johnson & Johnson, Merck, Mylan, Novartis, Pfizer, and vTv therapeutics. He has also received research support from and participated in scientific advisory boards for AstraZeneca, Boehringer Ingelheim, Bristol-Myers Squibb, Novo Nordisk, and Theracos. A coauthor is an employee of Elcelyx Therapeutics and disclosed being a shareholder of the company.
AT EASD 2017
Key clinical point: A delayed-release formulation of metformin appears to have a risk/benefit profile that would enable its use in patients with type 2 diabetes and chronic kidney disease.
Major finding: Change in HbA1c at week 16 (primary endpoint) was –0.49%, –0.62%, and –0.06% for metformin DR 1,200 mg, 1,500 mg, and placebo, respectively (P less than .05).
Data source: A 16-week, dose-ranging phase 2 trial involving 571 patients with T2DM and CKD.
Disclosures: The study was funded by Elcelyx Therapeutics. Dr. Frias disclosed receiving research support from Abbvie, Eli Lilly, IONIS, Janssen, Johnson & Johnson, Merck, Mylan, Novartis, Pfizer, and vTv therapeutics. He has also received research support from and participated in scientific advisory boards for AstraZeneca, Boehringer Ingelheim, Bristol-Myers Squibb, Novo Nordisk, and Theracos. A coauthor is an employee of Elcelyx Therapeutics and disclosed being a shareholder of the company.
Rituximab maintenance halves MCL death risk after ASCT
Rituximab maintenance therapy after autologous stem cell transplantation (ASCT) cut the risk of death in half among younger patients with mantle cell lymphoma (MCL), results of a phase 3 trial show.
After 50.2 months median follow-up, the overall survival rate for patients aged 65 or younger randomized to rituximab maintenance after four cycles of induction chemotherapy with rituximab, dexamethasone, cytarabine, and a platinum derivative (R-DHAP) followed by ASCT was 89%, compared with 80% for patients randomized to observation (P = .004), reported Steven Le Gouill, MD, PhD, of University Hospital Hotel-Dieu in Nantes, France, and colleagues.
In an unadjusted regression analysis, the difference translated into a hazard ratio for death within 4 years of 0.50 (P = .004) favoring rituximab, they wrote in the Sept. 28, 2017 issue of The New England Journal of Medicine.
“[A]n induction regimen with four courses of R-DHAP followed by transplantation without total-body irradiation resulted in a high rate of complete response. A 3-year course of rituximab maintenance therapy administered every 2 months prolonged overall survival among young patients with mantle cell lymphoma,” the investigators wrote (N Engl J Med. 2017;377:1250-60).
Dr. Le Gouill and his colleagues hypothesized that relapses following treatment for MCL may be caused by residual malignant cells that chemotherapy and ASCT fail to eradicate, suggesting that maintenance therapy with rituximab could help to suppress residual disease, prolong the duration of responses, and extend both progression-free and overall survival.
They cited an earlier study by members of the European Mantle Cell Lymphoma Network showing that among patients aged 60 and older who had a response to eight cycles of chemotherapy with rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP), maintenance therapy with rituximab was associated with an 87% 4-year overall survival rate vs. 63% for patients maintained on interferon alfa (P = .005) (N Engl J Med. 2012;367:520-31).
For the current study, the researchers enrolled 299 patients, of whom 257 went on to ASCT, and 240 of whom were randomized and were included in an intention-to-treat (ITT) analysis.
Patients received induction with four cycles of R-DHAP. Those patients who had partial responses or tumor mass shrinkage of less than 75% on CT received a rescue induction with four cycles of R-CHOP.
Those patients with complete or partial responses could then go on to transplantation after a conditioning regimen of R-BEAM (rituximab, carmustine, etoposide, cytarabine, and melphalan).
Patients randomized after ASCT to rituximab received it every 2 months for 3 years in an intravenous infusion at a dose of 375 mg/m2.
After a median of 50.2 months from randomization, the rate of 4-year event-free survival (no disease progression, relapse, death, or severe infection), the primary endpoint, was 79% for patients maintained on rituximab vs. 61% for those on observation alone (P = .001).
The 4-year progression-free survival rate also favored rituximab at 83% vs. 64%, respectively (P less than .001), with respective overall survival rates of 89% and 80%.
The median event-free survival, progression-free survival, and overall survival were not reached in either study arm.
For the 59 patients who for various reasons did not undergo randomization, the median progression-free survival was 11.0 months, and the median overall survival was 30.6 months.
In all, 83 of the 120 patients randomized to rituximab completed the scheduled 3 years of therapy. Maintenance therapy was stopped for disease progression in 16 patients and because of neutropenia in 9. There were 13 deaths in the rituximab arm, including 3 deaths from second malignancies.
Of the 120 patients assigned to observation, 37 had disease progression during the study period, and 24 died, one from a second malignancy.
Four patients in each study arm had serious infections after ASCT, including one case each of spondylitis, pyelonephritis, septicemia, and varicella pneumonia in the rituximab group, and septicemia, cellulitis, meningitis, and severe pneumonia in the observation group.
Lymphoma was the cause of death in 8 patients assigned to rituximab, and in 16 assigned to observation.
The investigators noted that although some centers use total-body irradiation for conditioning prior to transplant, this modality is not available in all centers and is associated with both short- and long-term toxicities. The progression-free survival results seen in this trial, in which only ablative drug regimens were used, “suggest that total-body irradiation–based conditioning regimens may not be superior to chemotherapy alone when an effective regimen is used during induction,” they wrote.
The study was supported by Roche and Amgen. Dr. Le Gouill disclosed fees for consulting and honoraria from Roche, Janssen-Cilag, and Celgene. Multiple coauthors disclosed similar relationships with industry.
FROM NEW ENGLAND JOURNAL OF MEDICINE
Key clinical point: Following stem cell transplantation, rituximab maintenance cut in half the risk for death in patients with mantle cell lymphoma.
Major finding: Four-year overall survival was 89% with rituximab maintenance vs. 80% for observation alone.
Data source: Randomized phase 3 trial in 240 patients aged 65 and younger at diagnosis of mantle cell lymphoma.
Disclosures: The study was supported by Roche and Amgen. Dr. Le Gouill disclosed fees for consulting and honoraria from Roche, Janssen-Cilag, and Celgene. Multiple coauthors disclosed similar relationships with industry.
Tips for Living With Ataxia
Click here to download the PDF.