Prescribing Statins for Patients With ACS? No Need to Wait

The best time to start a statin in patients with acute coronary syndrome is before they undergo percutaneous coronary intervention.

PRACTICE CHANGER
Prescribe a high-dose statin before any patient with acute coronary syndrome (ACS) undergoes percutaneous coronary intervention (PCI); it may be reasonable to extend this to patients being evaluated for ACS.1

STRENGTH OF RECOMMENDATION
A: Based on a meta-analysis1

ILLUSTRATIVE CASE
A 48-year-old man comes to the emergency department with chest pain and is diagnosed with ACS. He is scheduled to have PCI within the next 24 hours. When should you start him on a statin?

Statins are the mainstay pharmaceutical treatment for hyperlipidemia and are used for primary and secondary prevention of coronary artery disease and stroke.2,3 Well known for their cholesterol-lowering effect, they also offer benefits independent of lipids, including improving endothelial function, decreasing oxidative stress, and decreasing vascular inflammation.4-6

Compared with patients with stable angina, those with ACS experience markedly higher rates of coronary events, especially immediately before and after PCI and during the subsequent 30 days.1 American College of Cardiology/American Heart Association (ACC/AHA) guidelines for the management of non–ST-elevation myocardial infarction (NSTEMI) advocate starting statins before patients are discharged from the hospital, but they don’t specify precisely when.7

Considering the higher risk for coronary events before and after PCI and statins’ pleiotropic effects, it is reasonable to investigate the optimal time to start statins in patients with ACS.

STUDY SUMMARY
Meta-analysis shows statins before PCI cut risk for MI
Navarese et al1 performed a systematic review and meta-analysis of studies comparing the clinical outcomes of patients with ACS who received statins before or after PCI (statins group) with those who received low-dose or no statins (control group). The authors searched PubMed, Cochrane, Google Scholar, and CINAHL databases as well as key conference proceedings for studies published before November 2013. Using reasonable inclusion and exclusion criteria and appropriate statistical methods, they analyzed the results of 20 randomized controlled trials that included 8,750 patients. Four studies enrolled only patients with ST elevation MI (STEMI), eight were restricted to NSTEMI, and the remaining eight studies enrolled patients with any type of MI or unstable angina.

Patients started on a statin before PCI received it a mean of 0.53 days before the procedure; those started afterward received it a mean of 3.18 days after.

Administering statins before PCI resulted in a greater reduction in the odds of MI than did starting them afterward. Whether administered before or after PCI, statins reduced the incidence of MI. The overall 30-day incidence of MI was 3.4% (123 of 3,621) in the statin group and 5.0% (179 of 3,577) in the control group, for an absolute risk reduction of 1.6% (number needed to treat = 62.5) and a 33% reduction in the odds of MI (odds ratio [OR] = 0.67). There was also a trend toward reduced mortality in the statin group (OR = 0.66).
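The effect sizes above follow directly from the raw event counts. A quick back-of-the-envelope check (the variable names are ours, not the study's) reproduces the reported figures:

```python
# Reproduce the 30-day MI effect sizes from the counts reported in the
# meta-analysis: 123/3,621 events (statin group) vs 179/3,577 (control).
events_statin, n_statin = 123, 3621
events_control, n_control = 179, 3577

risk_statin = events_statin / n_statin      # ~3.4% 30-day MI incidence
risk_control = events_control / n_control   # ~5.0%

arr = risk_control - risk_statin            # absolute risk reduction, ~1.6%
nnt = 1 / arr                               # ~62; the article's 62.5 uses the rounded 1.6%

# Odds ratio: odds of MI with statins divided by odds without
odds_ratio = (events_statin / (n_statin - events_statin)) / (
    events_control / (n_control - events_control)
)                                           # ~0.67, i.e., a 33% reduction in odds

print(round(risk_statin * 100, 1), round(risk_control * 100, 1),
      round(arr * 100, 1), round(odds_ratio, 2))
```

Note that the NNT of 62.5 quoted in the article is simply 1 divided by the rounded 1.6% ARR; the unrounded counts give an NNT just over 62.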

In addition, administering statins before PCI resulted in a greater reduction in the odds of MI at 30 days (OR = 0.38) than starting them post-PCI (OR = 0.85) when compared to the controls. The difference between the pre-PCI OR and the post-PCI OR was statistically significant; these findings persisted past 30 days.

WHAT’S NEW
Early statin administration is most effective
According to ACC/AHA guidelines, all patients with ACS should be receiving a statin by the time they are discharged. However, when to start the statin is not specified. This meta-analysis is the first report to show that administering a statin before PCI can significantly reduce the risk for subsequent MI.

CAVEATS
Benefits might vary with different statins
The studies evaluated in this meta-analysis used various statins and dosing regimens, which could have affected the results. However, sensitivity analyses found similar benefits across different types of statins. In addition, most of the included trials used high doses of statins, which minimized the potential discrepancy in outcomes from various dosing regimens. And while the included studies were not perfect, Navarese et al1 used reasonable methods to identify potential biases.

CHALLENGES TO IMPLEMENTATION
No barriers to earlier start
Implementing this intervention may be as simple as editing a standard order. This meta-analysis also suggests that the earlier the intervention, the greater the benefit, which may be an argument for starting a statin when a patient first presents for evaluation for ACS, since the associated risks are quite low. We believe it would be beneficial if the next update of the ACC/AHA guidelines7 included this recommendation.

REFERENCES
1. Navarese EP, Kowalewski M, Andreotti F, et al. Meta-analysis of time-related benefits of statin therapy in patients with acute coronary syndrome undergoing percutaneous coronary intervention. Am J Cardiol. 2014;113:1753-1764.
2. Pignone M, Phillips C, Mulrow C. Use of lipid lowering drugs for primary prevention of coronary heart disease: meta-analysis of randomised trials. BMJ. 2000;321:983-986.
3. The Long-Term Intervention with Pravastatin in Ischaemic Disease (LIPID) Study Group. Prevention of cardiovascular events and death with pravastatin in patients with coronary heart disease and a broad range of initial cholesterol levels. N Engl J Med. 1998;339:1349-1357.
4. Liao JK. Beyond lipid lowering: the role of statins in vascular protection. Int J Cardiol. 2002;86:5-18.
5. Li J, Li JJ, He JG, et al. Atorvastatin decreases C-reactive protein-induced inflammatory response in pulmonary artery smooth muscle cells by inhibiting nuclear factor-kappaB pathway. Cardiovasc Ther. 2010;28:8-14.
6. Tandon V, Bano G, Khajuria V, et al. Pleiotropic effects of statins. Indian J Pharmacol. 2005;37:77-85.
7. Wright RS, Anderson JL, Adams CD, et al; American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. 2011 ACCF/AHA focused update incorporated into the ACC/AHA 2007 Guidelines for the Management of Patients with Unstable Angina/Non-ST-Elevation Myocardial Infarction: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines developed in collaboration with the American Academy of Family Physicians, Society for Cardiovascular Angiography and Interventions, and the Society of Thoracic Surgeons. J Am Coll Cardiol. 2011;57:e215-e367.

ACKNOWLEDGEMENT
The PURLs Surveillance System was supported in part by Grant Number UL1RR024999 from the National Center For Research Resources, a Clinical Translational Science Award to the University of Chicago. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center For Research Resources or the National Institutes of Health.

Copyright © 2014. The Family Physicians Inquiries Network. All rights reserved.

Reprinted with permission from the Family Physicians Inquiries Network and The Journal of Family Practice. 2014;63(12):735, 738.

Author and Disclosure Information

Hanna Gov-Ari, MD, James J. Stevermer, MD, MSPH

Hanna Gov-Ari and James J. Stevermer are in the Department of Family and Community Medicine at the University of Missouri–Columbia.

Issue
Clinician Reviews - 25(1)
Page Number
26-27

Pain Out of Proportion to a Fracture

How did a case of a "simple fracture" end with a $2.75 million settlement?

A 57-year-old woman sustained an injury to her left shoulder during a fall down stairs. She presented to the emergency department, where a physician ordered x-rays that a radiologist interpreted as depicting a simple fracture.

The patient claimed that the radiologist misread the x-rays and that the emergency medicine (EM) physician failed to realize her pain was out of proportion to a fracture. She said the EM physician should have ordered additional tests and sought a radiologic consult. The patient contended that she had actually dislocated her shoulder and that the delay in treatment caused her condition to worsen, leaving her unable to use her left hand.

In addition to the radiologist and the EM physician, two nurses were named as defendants. The plaintiff maintained that they had failed to notify the physician when her condition deteriorated.

OUTCOME
A $2.75 million settlement was reached. The hospital, the EM physician, and the nurses were responsible for $1.5 million and the radiologist, $1.25 million.

COMMENT
Although complex regional pain syndrome (CRPS, formerly known as reflex sympathetic dystrophy) is not specifically mentioned in this case synopsis, the size of the settlement suggests that it was likely claimed as the resulting injury. CRPS is frequently a source of litigation.

Relatively minor trauma can lead to CRPS; why only certain patients subsequently develop the syndrome, however, is a mystery. What is certain is that CRPS is recognized as one of the most painful conditions known to humankind. Once it develops, the syndrome can result in constant, debilitating pain, the loss of a limb, and near-total decay of a patient’s quality of life.

Plaintiffs’ attorneys are quick to claim negligence and substantial damages for these patients, with their sad, compelling stories. Because the underlying pathophysiology of CRPS is unclear, liability is often hotly debated, with cases difficult to defend.

Malpractice cases generally involve two elements: liability (the presence and magnitude of the error) and damages (the severity of the injury and impact on life). CRPS cases are often considered “damages” cases, because while liability may be uncertain, the patient’s damages are very clear. An understandably sympathetic jury panel sees the unfortunate patient’s red, swollen, misshapen limb, hears the story of the patient’s ever-present, exquisite pain, and (based largely on human emotion) infers negligence based on the magnitude of the patient’s suffering.

In this case, the patient sustained a shoulder injury in a fall that was initially treated as a fracture (presumptively proximal) but later determined to be a dislocation. Management of the injury was not described, but we can assume that if a fracture was diagnosed, the shoulder joint was immobilized. The plaintiff did not claim that there were any diminished neurovascular findings at the time of injury. We are not told whether follow-up was arranged for the patient, what the final, full diagnosis was (eg, fracture/anterior dislocation of the proximal humerus), or when/if the shoulder was actively reduced.

Under these circumstances, what could a bedside clinician have done differently? The most prominent element is the report of “pain out of proportion to the diagnosis.” When confronted with pain that seems out of proportion to a limb injury, stop and review the case. Be sure to consider occult or evolving neurovascular injury (eg, compartment syndrome, brachial plexus injury). Seek consultation and a second opinion in cases involving pain that seems intractable and out of proportion.

One quick word about pain and drug-seeking behavior. Many of us are all too familiar with patients who overstate their symptoms to obtain narcotic pain medications. Will you encounter drug seekers who embellish their level of pain to obtain narcotics? You know the answer to that question.

But it is necessary to take an injured patient’s claim of pain as stated. Don’t view yourself as “wrong” or “fooled” if patients misstate their level of pain and you respond accordingly. In many cases, there is no way to differentiate between genuine manifestations of pain and gamesmanship. To attempt to do so is dangerous because it may lead you to dismiss a patient with genuine pain for fear of being “fooled.” Don’t. Few situations will irritate a jury more than a patient with genuine pathology who is wrongly considered a “drug seeker.” Take patients at face value and act appropriately if substance misuse is later discovered.

In this case, recognition of out-of-control pain may have resulted in an orthopedic consultation. At minimum, that would demonstrate that the patient’s pain was taken seriously and the clinicians acted with due concern for her.  —DML

Author and Disclosure Information

David M. Lang, JD, PA-C

Commentary by David M. Lang, JD, PA-C, an experienced PA and a former medical malpractice defense attorney who practices law in Granite Bay, California. Cases reprinted with permission from Medical Malpractice Verdicts, Settlements and Experts, Lewis Laska, Editor, (800) 298-6288.

Issue
Clinician Reviews - 25(1)
Page Number
19-20
How did a case of a "simple fracture" end with a $2.75 million settlement?
How did a case of a "simple fracture" end with a $2.75 million settlement?

A 57-year-old woman sustained an injury to her left shoulder during a fall down stairs. She presented to the emergency department, where a physician ordered x-rays that a radiologist interpreted as depicting a simple fracture.

The patient claimed that the radiologist misread the x-rays and that the emergency medicine (EM) physician failed to realize her pain was out of proportion to a fracture. She said the EM physician should have ordered additional tests and sought a radiologic consult. The patient contended that she had actually dislocated her shoulder and that the delay in treatment caused her condition to worsen, leaving her unable to use her left hand.

In addition to the radiologist and the EM physician, two nurses were named as defendants. The plaintiff maintained that they had failed to notify the physician when her condition deteriorated.

OUTCOME
A $2.75 million settlement was reached. The hospital, the EM physician, and the nurses were responsible for $1.5 million and the radiologist, $1.25 million.

Continue for David M. Lang's comments >>

 

 

COMMENT
Although complex regional pain syndrome (CRPS, formerly known as reflex sympathetic dystrophy) is not specifically mentioned in this case synopsis, the size of the settlement suggests that it was likely claimed as the resulting injury. CRPS is frequently a source of litigation.

Relatively minor trauma can lead to CRPS; why only certain patients subsequently develop the syndrome, however, is a mystery. What is certain is that CRPS is recognized as one of the most painful conditions known to humankind. Once it develops, the syndrome can result in constant, debilitating pain, the loss of a limb, and near-total decay of a patient’s quality of life.

Plaintiffs’ attorneys are quick to claim negligence and substantial damages for these patients, with their sad, compelling stories. Because the underlying pathophysiology of CRPS is unclear, liability is often hotly debated, with cases difficult to defend.

Malpractice cases generally involve two elements: liability (the presence and magnitude of the error) and damages (the severity of the injury and impact on life). CRPS cases are often considered “damages” cases, because while liability may be uncertain, the patient’s damages are very clear. An understandably sympathetic jury panel sees the unfortunate patient’s red, swollen, misshapen limb, hears the story of the patient’s ever-present, exquisite pain, and (based largely on human emotion) infers negligence based on the magnitude of the patient’s suffering.

In this case, the patient sustained a shoulder injury in a fall that was initially treated as a fracture (presumptively proximal) but later determined to be a dislocation. Management of the injury was not described, but we can assume that if a fracture was diagnosed, the shoulder joint was immobilized. The plaintiff did not claim that there were any diminished neurovascular findings at the time of injury. We are not told whether follow-up was arranged for the patient, what the final, full diagnosis was (eg, fracture/anterior dislocation of the proximal humerus), or when/if the shoulder was actively reduced.

Under these circumstances, what could a bedside clinician have done differently? The most prominent element is the report of “pain out of proportion to the diagnosis.” When confronted with pain that seems out of proportion to a limb injury, stop and review the case. Be sure to consider occult or evolving neurovascular injury (eg, compartment syndrome, brachial plexus injury). Seek consultation and a second opinion in cases involving pain that seems intractable and out of proportion.

One quick word about pain and drug-seeking behavior. Many of us are all too familiar with patients who overstate their symptoms to obtain narcotic pain medications. Will you encounter drug seekers who embellish their level of pain to obtain narcotics? You know the answer to that question.

But it is necessary to take an injured patient’s claim of pain as stated. Don’t view yourself as “wrong” or “fooled” if patients misstate their level of pain and you respond accordingly. In many cases, there is no way to differentiate between genuine manifestations of pain and gamesmanship. To attempt to do so is dangerous because it may lead you to dismiss a patient with genuine pain for fear of being “fooled.” Don’t. Few situations will irritate a jury more than a patient with genuine pathology who is wrongly considered a “drug seeker.” Take patients at face value and act appropriately if substance misuse is later discovered.

In this case, recognition of out-of-control pain may have resulted in an orthopedic consultation. At minimum, that would demonstrate that the patient’s pain was taken seriously and the clinicians acted with due concern for her.  —DML

Issue
Clinician Reviews - 25(1)
Page Number
19-20
Display Headline
Pain Out of Proportion to a Fracture
Legacy Keywords
malpractice, chronicle, fracture

Varying cutoffs of vitamin D add confusion to field

Article Type
Changed
Display Headline
Varying cutoffs of vitamin D add confusion to field

Efforts to reach agreement on how vitamin D deficiency is defined are complicated by the fact that the cutoff points used in reports from clinical laboratories vary widely.

“I think reporting is a great problem because primary care physicians are very hurried,” Dr. John F. Aloia said at a public conference on vitamin D sponsored by the National Institutes of Health. “When you look at the laboratory report, what you get is a column that’s normal and another column that’s low or high. The choice of the laboratories to choose their own cutpoints is really a problem. The other part of that reporting is using the low level of normal in a range at the RDA [recommended daily allowance].”

In its recently updated recommendations on vitamin D screening, the U.S. Preventive Services Task Force noted that variability between serum vitamin D assay methods “and between laboratories using the same methods may range from 10% to 20%, and classification of samples as ‘deficient’ or ‘nondeficient’ may vary by 4% to 32%, depending on which assay is used. Another factor that may complicate interpretation is that 25-(OH)D may act as a negative acute-phase reactant and its levels may decrease in response to inflammation. Lastly, whether common laboratory reference ranges are appropriate for all ethnic groups is unclear.”

Trying to exert influence on what ranges of serum vitamin D laboratories are using in reporting data “is an issue,” said Dr. Aloia, director of the Bone Mineral Research Center at Winthrop University Hospital, Mineola, N.Y., and professor of medicine at Stony Brook (N.Y.) University. “A laboratory can report anything it chooses to. For instance, the American College of Pathology and other [professional organizations] don’t have the responsibility for [the cut-offs in] those reports.”

Dr. Aloia favors translating the reporting of vitamin D levels based on something like Z scores, “so when you see lab reports, some of them will have a paragraph of explanation to guide the physician,” he explained. “We’re going to need that. We have to move away from just [a] cutpoint range and the lower level of the range being the RDA.”
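
Dr. Aloia’s Z-score proposal amounts to reporting where a patient’s level falls within a reference distribution rather than against a lab-chosen cutpoint. A minimal sketch of that arithmetic follows; the reference mean and standard deviation used here are illustrative placeholders, not values endorsed by any laboratory or guideline:

```python
# Z-score-style reporting of a 25-hydroxyvitamin D level:
# express the result relative to a reference population rather than
# flagging it against a single laboratory cutpoint.
# NOTE: ref_mean and ref_sd are hypothetical illustration values.

def vitamin_d_z_score(level_ng_ml, ref_mean=30.0, ref_sd=10.0):
    """Return (level - reference mean) / reference SD."""
    return (level_ng_ml - ref_mean) / ref_sd

# A level of 20 ng/mL against this illustrative reference
# falls 1 SD below the reference mean:
z = vitamin_d_z_score(20.0)  # -> -1.0
```

A report built this way could say, for example, “1 SD below the reference mean,” accompanied by the explanatory paragraph Dr. Aloia describes, instead of an unannotated “low” column.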

Dr. Roger Bouillon, professor emeritus of internal medicine at the University of Leuven (Belgium), supports a threshold of 20 ng/mL serum vitamin D in adults. “I don’t like a range [of vitamin D]; they just need to have a level above 20 ng/mL. For me, a threshold is the best strategy on a population basis.”

During an open comment session, attendee Dr. Neil C. Binkley expressed concern over applying Z-score principles to the vitamin D field. “I love bone density measurement,” said Dr. Binkley, codirector of the Osteoporosis Clinical Center & Research Program at the University of Wisconsin, Madison, and past president of the International Society for Clinical Densitometry. “The T-score was in fact an advance in the field. But I can’t tell you how strongly I would urge you to not consider T-scores or Z-scores or something like that in the vitamin D field. Rather, I would urge that we do a better job at measuring 25-hydroxyvitamin D so our laboratories agree and have concise guidance for primary care. If you choose to go into the probability realm and the Z-scores, it is going to be a disaster.”

The presenters reported having no financial disclosures.

[email protected]

On Twitter @dougbrunk

Legacy Keywords
NIH, vitamin D, Aloia, Bouillon, cut-off, cut point

Article Source

FROM AN NIH PUBLIC CONFERENCE ON VITAMIN D

A step away from immediate umbilical cord clamping

Article Type
Changed
Display Headline
A step away from immediate umbilical cord clamping

The common practice of immediate cord clamping, which generally means clamping within 15-20 seconds after birth, was fueled by efforts to reduce the risk of postpartum hemorrhage, a leading cause of maternal death worldwide. Immediate clamping was part of a full active management intervention recommended in 2007 by the World Health Organization, along with the use of uterotonics (generally oxytocin) immediately after birth and controlled cord traction to quickly deliver the placenta.

Adoption of the WHO-recommended “active management of the third stage of labor” (AMTSL) worked, leading to a 70% reduction in postpartum hemorrhage and a 60% reduction in blood transfusion over passive management. However, it appears that immediate cord clamping has not played an important role in these reductions. Several randomized controlled trials have shown that early clamping does not impact the risk of postpartum hemorrhage (> 1000 cc or > 500 cc), nor does it impact the need for manual removal of the placenta or the need for blood transfusion.

Instead, the critical component of the AMTSL package appears to be administration of a uterotonic, as reported in a large WHO-directed multicenter clinical trial published in 2012. The study also found that women who received controlled cord traction bled an average of 11 cc less – an insignificant difference – than did women who delivered their placentas by their own effort. Moreover, they had a third stage of labor that was an average of 6 minutes shorter (Lancet 2012;379:1721-7).

With assurance that the timing of umbilical cord clamping does not impact maternal outcomes, investigators have begun to look more at the impact of immediate versus delayed cord clamping on the health of the baby.

Thus far, the issues in this arena are a bit more complicated than on the maternal side. There are indications, however, that slight delays in umbilical cord clamping may be beneficial for the newborn – particularly for preterm infants, who appear in systematic reviews to have a nearly 50% reduction in intraventricular hemorrhage when clamping is delayed.

Timing in term infants

The theoretical benefits of delayed cord clamping include increased neonatal blood volume (improved perfusion and decreased organ injury), more time for spontaneous breathing (reduced risks of resuscitation and a smoother transition of cardiopulmonary and cerebral circulation), and increased stem cells for the infant (anti-inflammatory, neurotropic, and neuroprotective effects).

Dr. George A. Macones

Theoretically, delayed clamping will increase the infant’s iron stores and lower the incidence of iron deficiency anemia during infancy. This is particularly relevant in developing countries, where up to 50% of infants have anemia by 1 year of age. Anemia is consistently associated with abnormal neurodevelopment, and treatment may not always reverse developmental issues.

On the negative side, delayed clamping is associated with theoretical concerns about hyperbilirubinemia and jaundice, hypothermia, polycythemia, and delays in the bonding of infants and mothers.

For term infants, our best reading on the benefits and risks of delayed umbilical cord clamping comes from a 2013 Cochrane systematic review that assessed results from 15 randomized controlled trials involving 3,911 women and infant pairs. Early cord clamping was generally carried out within 60 seconds of birth, whereas delayed cord clamping involved clamping the umbilical cord more than 1 minute after birth or when cord pulsation has ceased.

The review found that delayed clamping was associated with a significantly higher neonatal hemoglobin concentration at 24-48 hours postpartum (a weighted mean difference of 2 g/dL) and increased iron reserves up to 6 months after birth. Infants in the early clamping group were more than twice as likely to be iron deficient at 3-6 months compared with infants whose cord clamping was delayed (Cochrane Database Syst. Rev. 2013;7:CD004074).

There were no significant differences between early and late clamping in neonatal mortality or for most other neonatal morbidity outcomes. Delayed clamping also did not increase the risk of severe postpartum hemorrhage, blood loss, or reduced hemoglobin levels in mothers.

The downside to delayed cord clamping was an increased risk of jaundice requiring phototherapy. Infants in the later cord clamping group were 40% more likely to need phototherapy – a difference that equates to 3% of infants in the early clamping group and 5% of infants in the late clamping group.
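
To make that tradeoff concrete, a quick back-of-the-envelope calculation on the rounded rates quoted above (ours, not the review’s):

```python
# Rough arithmetic on the phototherapy rates quoted from the 2013
# Cochrane review: 3% with early clamping vs. 5% with delayed clamping.
# This is an illustrative check, not an analysis from the review itself.

early_rate = 0.03  # proportion needing phototherapy, early clamping
late_rate = 0.05   # proportion needing phototherapy, delayed clamping

absolute_increase = late_rate - early_rate      # ~0.02, i.e., 2 percentage points
number_needed_to_harm = 1 / absolute_increase   # ~50: roughly one additional
                                                # phototherapy course per 50
                                                # delayed-clamping births
```

Framed this way, the jaundice risk is real but modest in absolute terms, which is why it gets weighed against local rates of iron deficiency.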

Data in the Cochrane review were insufficient to draw reliable conclusions about other short-term outcomes such as symptomatic polycythemia, respiratory problems, hypothermia, and infection, and data on long-term outcomes were also limited.

In practice, this means that the risk of jaundice must be weighed against the risk of iron deficiency. In developed countries we have the resources both to increase iron stores of infants and to provide phototherapy. While the WHO recommends umbilical cord clamping after 1-3 minutes to improve an infant’s iron status, I do not believe the evidence is strong enough to universally adopt such delayed cord clamping in the United States.

Considering the risks of jaundice and the relative infrequency of iron deficiency in the United States, we should not routinely delay clamping for term infants at this point.

A recent committee opinion developed by the American College of Obstetricians and Gynecologists and endorsed by the American Academy of Pediatrics (No. 543, December 2012) captures this view by concluding that “insufficient evidence exists to support or to refute the benefits from delayed umbilical cord clamping for term infants that are born in settings with rich resources.” Although the ACOG opinion preceded the Cochrane review, the committee, of which I was a member, reviewed much of the same literature.

Timing in preterm infants

Preterm neonates are at increased risk of temperature dysregulation, hypotension, and the need for rapid initial pediatric care and blood transfusion. The increased risk of intraventricular hemorrhage and necrotizing enterocolitis in preterm infants is possibly related to the increased risk of hypotension.

As with term infants, a 2012 Cochrane systematic review offers good insight on our current knowledge. This review of umbilical cord clamping at preterm birth covers 15 studies that included 738 infants delivered between 24 and 36 weeks of gestation. The timing of umbilical cord clamping ranged from 25 seconds to a maximum of 180 seconds (Cochrane Database Syst. Rev. 2012;8:CD003248).

Delayed cord clamping was associated with fewer transfusions for anemia or low blood pressure, less intraventricular hemorrhage of all grades (relative risk 0.59), and a lower risk for necrotizing enterocolitis (relative risk 0.62), compared with immediate clamping.

While there were no clear differences with respect to severe intraventricular hemorrhage (grades 3-4), the nearly 50% reduction in intraventricular hemorrhage overall among deliveries with delayed clamping was significant enough to prompt ACOG to conclude that delayed cord clamping should be considered for preterm infants. This reduction in intraventricular hemorrhage appears to be the single most important benefit, based on current findings.

The data on cord clamping in preterm infants are suggestive of benefit, but are not robust. The studies published thus far have been small, and many of them, as the 2012 Cochrane review points out, involved incomplete reporting and wide confidence intervals. Moreover, just as with the studies on term infants, there has been a lack of long-term follow-up in most of the published trials.

When considering delayed cord clamping in preterm infants, as the ACOG Committee Opinion recommends, I urge focusing on earlier gestational ages. Allowing more placental transfusion at births that occur at or after 36 weeks of gestation may not make much sense because by that point the risk of intraventricular hemorrhage is almost nonexistent.

Our practice and the future

At our institution, births that occur at less than 32 weeks of gestation are eligible for delayed umbilical cord clamping, usually at 30-45 seconds after birth. The main contraindications are placental abruption and multiples.

We do not perform any milking or stripping of the umbilical cord, as the risks are unknown and it is not yet clear whether such practices are equivalent to delayed cord clamping. Compared with delayed cord clamping, which is a natural passive transfusion of placental blood to the infant, milking and stripping are not physiologic.

Additional data from an ongoing large international multicenter study, the Australian Placental Transfusion Study, may resolve some of the current controversy. This study is evaluating delayed cord clamping in neonates born at less than 30 weeks’ gestation. Another ongoing study in Europe should also provide more information.

These studies – and other trials that are larger and longer than the trials published thus far – are necessary to evaluate long-term outcomes and to establish the ideal timing for umbilical cord clamping. Research is also needed to evaluate the management of the third stage of labor relative to umbilical cord clamping as well as the timing in relation to the initiation of voluntary or assisted ventilation.

Dr. Macones said he had no relevant financial disclosures.

Dr. Macones is the Mitchell and Elaine Yanow Professor and Chair, and director of the division of maternal-fetal medicine and ultrasound in the department of obstetrics and gynecology at Washington University, St. Louis.

Legacy Keywords
umbilical cord clamping, obstetrics, preterm delivery


 

 

Considering the risks of jaundice and the relative infrequency of iron deficiency in the United States, we should not routinely delay clamping for term infants at this point.

A recent committee opinion developed by the American College of Obstetricians and Gynecologists and endorsed by the American Academy of Pediatrics (No. 543, December 2012) captures this view by concluding that “insufficient evidence exists to support or to refute the benefits from delayed umbilical cord clamping for term infants that are born in settings with rich resources.” Although the ACOG opinion preceded the Cochrane review, the committee, of which I was a member, reviewed much of the same literature.

Timing in preterm infants

Preterm neonates are at increased risk of temperature dysregulation, hypotension, and the need for rapid initial pediatric care and blood transfusion. The increased risk of intraventricular hemorrhage and necrotizing enterocolitis in preterm infants is possibly related to the increased risk of hypotension.

As with term infants, a 2012 Cochrane systematic review offers good insight on our current knowledge. This review of umbilical cord clamping at preterm birth covers 15 studies that included 738 infants delivered between 24 and 36 weeks of gestation. The timing of umbilical cord clamping ranged from 25 seconds to a maximum of 180 seconds (Cochrane Database Syst. Rev. 2012;8:CD003248).

Delayed cord clamping was associated with fewer transfusions for anemia or low blood pressure, less intraventricular hemorrhage of all grades (relative risk 0.59), and a lower risk for necrotizing enterocolitis (relative risk 0.62), compared with immediate clamping.

While there were no clear differences with respect to severe intraventricular hemorrhage (grades 3-4), the nearly 50% reduction in intraventricular hemorrhage overall among deliveries with delayed clamping was significant enough to prompt ACOG to conclude that delayed cord clamping should be considered for preterm infants. This reduction in intraventricular hemorrhage appears to be the single most important benefit, based on current findings.

The data on cord clamping in preterm infants are suggestive of benefit, but are not robust. The studies published thus far have been small, and many of them, as the 2012 Cochrane review points out, involved incomplete reporting and wide confidence intervals. Moreover, just as with the studies on term infants, there has been a lack of long-term follow-up in most of the published trials.

When considering delayed cord clamping in preterm infants, as the ACOG Committee Opinion recommends, I urge focusing on earlier gestational ages. Allowing more placental transfusion at births that occur at or after 36 weeks of gestation may not make much sense because by that point the risk of intraventricular hemorrhage is almost nonexistent.

Our practice and the future

At our institution, births that occur at less than 32 weeks of gestation are eligible for delayed umbilical cord clamping, usually at 30-45 seconds after birth. The main contraindications are placental abruption and multiples.

We do not perform any milking or stripping of the umbilical cord, as the risks are unknown and it is not yet clear whether such practices are equivalent to delayed cord clamping. Compared with delayed cord clamping, which is a natural passive transfusion of placental blood to the infant, milking and stripping are not physiologic.

Additional data from an ongoing large international multicenter study, the Australian Placental Transfusion Study, may resolve some of the current controversy. This study is evaluating the cord clamping in neonates < 30 weeks’ gestation. Another study ongoing in Europe should also provide more information.

These studies – and other trials that are larger and longer than the trials published thus far – are necessary to evaluate long-term outcomes and to establish the ideal timing for umbilical cord clamping. Research is also needed to evaluate the management of the third stage of labor relative to umbilical cord clamping as well as the timing in relation to the initiation of voluntary or assisted ventilation.

Dr. Macones said he had no relevant financial disclosures.

Dr. Macones is the Mitchell and Elaine Yanow Professor and Chair, and director of the division of maternal-fetal medicine and ultrasound in the department of obstetrics and gynecology at Washington University, St. Louis.

The common practice of immediate cord clamping, which generally means clamping within 15-20 seconds after birth, was fueled by efforts to reduce the risk of postpartum hemorrhage, a leading cause of maternal death worldwide. Immediate clamping was part of a full active management intervention recommended in 2007 by the World Health Organization, along with the use of uterotonics (generally oxytocin) immediately after birth and controlled cord traction to quickly deliver the placenta.

Adoption of the WHO-recommended “active management of the third stage of labor” (AMTSL) worked, leading to a 70% reduction in postpartum hemorrhage and a 60% reduction in blood transfusion compared with passive management. However, it appears that immediate cord clamping has not played an important role in these reductions. Several randomized controlled trials have shown that early clamping does not affect the risk of postpartum hemorrhage (> 1,000 cc or > 500 cc), the need for manual removal of the placenta, or the need for blood transfusion.

Instead, the critical component of the AMTSL package appears to be administration of a uterotonic, as reported in a large WHO-directed multicenter clinical trial published in 2012. The study also found that women who received controlled cord traction bled an average of 11 cc less – an insignificant difference – than did women who delivered their placentas by their own effort. Moreover, they had a third stage of labor that was an average of 6 minutes shorter (Lancet 2012;379:1721-7).

With assurance that the timing of umbilical cord clamping does not impact maternal outcomes, investigators have begun to look more at the impact of immediate versus delayed cord clamping on the health of the baby.

Thus far, the issues in this arena are a bit more complicated than on the maternal side. There are indications, however, that slight delays in umbilical cord clamping may be beneficial for the newborn – particularly for preterm infants, who appear in systematic reviews to have a nearly 50% reduction in intraventricular hemorrhage when clamping is delayed.

Timing in term infants

The theoretical benefits of delayed cord clamping include increased neonatal blood volume (improved perfusion and decreased organ injury), more time for spontaneous breathing (reduced risks of resuscitation and a smoother transition of cardiopulmonary and cerebral circulation), and increased stem cells for the infant (anti-inflammatory, neurotropic, and neuroprotective effects).

Theoretically, delayed clamping will increase the infant’s iron stores and lower the incidence of iron deficiency anemia during infancy. This is particularly relevant in developing countries, where up to 50% of infants have anemia by 1 year of age. Anemia is consistently associated with abnormal neurodevelopment, and treatment may not always reverse developmental issues.

On the negative side, delayed clamping is associated with theoretical concerns about hyperbilirubinemia and jaundice, hypothermia, polycythemia, and delays in the bonding of infants and mothers.

For term infants, our best reading on the benefits and risks of delayed umbilical cord clamping comes from a 2013 Cochrane systematic review that assessed results from 15 randomized controlled trials involving 3,911 women and their infants. Early cord clamping was generally carried out within 60 seconds of birth, whereas delayed cord clamping involved clamping the umbilical cord more than 1 minute after birth or after cord pulsation had ceased.

The review found that delayed clamping was associated with a significantly higher neonatal hemoglobin concentration at 24-48 hours postpartum (a weighted mean difference of 2 g/dL) and increased iron reserves up to 6 months after birth. Infants in the early clamping group were more than twice as likely to be iron deficient at 3-6 months compared with infants whose cord clamping was delayed (Cochrane Database Syst. Rev. 2013;7:CD004074).

There were no significant differences between early and late clamping in neonatal mortality or for most other neonatal morbidity outcomes. Delayed clamping also did not increase the risk of severe postpartum hemorrhage, blood loss, or reduced hemoglobin levels in mothers.

The downside to delayed cord clamping was an increased risk of jaundice requiring phototherapy. Infants in the later cord clamping group were 40% more likely to need phototherapy; about 5% of them required it, compared with 3% of infants in the early clamping group.

Data in the Cochrane review were insufficient to draw reliable conclusions about the comparative effects on other short-term outcomes, such as symptomatic polycythemia, respiratory problems, hypothermia, and infection, and data on long-term outcomes were similarly limited.

In practice, this means that the risk of jaundice must be weighed against the risk of iron deficiency. In developed countries we have the resources both to increase iron stores of infants and to provide phototherapy. While the WHO recommends umbilical cord clamping after 1-3 minutes to improve an infant’s iron status, I do not believe the evidence is strong enough to universally adopt such delayed cord clamping in the United States.

Considering the risks of jaundice and the relative infrequency of iron deficiency in the United States, we should not routinely delay clamping for term infants at this point.

A recent committee opinion developed by the American College of Obstetricians and Gynecologists and endorsed by the American Academy of Pediatrics (No. 543, December 2012) captures this view by concluding that “insufficient evidence exists to support or to refute the benefits from delayed umbilical cord clamping for term infants that are born in settings with rich resources.” Although the ACOG opinion preceded the Cochrane review, the committee, of which I was a member, reviewed much of the same literature.

Timing in preterm infants

Preterm neonates are at increased risk of temperature dysregulation and hypotension, and they are more likely to need rapid initial pediatric care and blood transfusion. Their increased risk of intraventricular hemorrhage and necrotizing enterocolitis is possibly related to this increased risk of hypotension.

As with term infants, a 2012 Cochrane systematic review offers good insight into our current knowledge. This review of umbilical cord clamping at preterm birth covers 15 studies that included 738 infants delivered between 24 and 36 weeks of gestation. The timing of umbilical cord clamping ranged from 25 seconds to a maximum of 180 seconds (Cochrane Database Syst. Rev. 2012;8:CD003248).

Delayed cord clamping was associated with fewer transfusions for anemia or low blood pressure, less intraventricular hemorrhage of all grades (relative risk 0.59), and a lower risk for necrotizing enterocolitis (relative risk 0.62), compared with immediate clamping.

While there were no clear differences with respect to severe intraventricular hemorrhage (grades 3-4), the nearly 50% reduction in intraventricular hemorrhage overall among deliveries with delayed clamping was significant enough to prompt ACOG to conclude that delayed cord clamping should be considered for preterm infants. This reduction in intraventricular hemorrhage appears to be the single most important benefit, based on current findings.

The data on cord clamping in preterm infants are suggestive of benefit, but are not robust. The studies published thus far have been small, and many of them, as the 2012 Cochrane review points out, involved incomplete reporting and wide confidence intervals. Moreover, just as with the studies on term infants, there has been a lack of long-term follow-up in most of the published trials.

When considering delayed cord clamping in preterm infants, as the ACOG Committee Opinion recommends, I urge focusing on earlier gestational ages. Allowing more placental transfusion at births that occur at or after 36 weeks of gestation may not make much sense because by that point the risk of intraventricular hemorrhage is almost nonexistent.

Our practice and the future

At our institution, births that occur at less than 32 weeks of gestation are eligible for delayed umbilical cord clamping, usually at 30-45 seconds after birth. The main contraindications are placental abruption and multiples.

We do not perform any milking or stripping of the umbilical cord, as the risks are unknown and it is not yet clear whether such practices are equivalent to delayed cord clamping. Compared with delayed cord clamping, which is a natural passive transfusion of placental blood to the infant, milking and stripping are not physiologic.

Additional data from an ongoing large international multicenter study, the Australian Placental Transfusion Study, may resolve some of the current controversy. This study is evaluating cord clamping in neonates born at < 30 weeks’ gestation. Another ongoing study in Europe should provide more information.

These studies – and other trials that are larger and longer than those published thus far – are necessary to evaluate long-term outcomes and to establish the ideal timing for umbilical cord clamping. Research is also needed to evaluate the management of the third stage of labor relative to umbilical cord clamping, as well as the timing of clamping in relation to the initiation of spontaneous or assisted ventilation.

Dr. Macones said he had no relevant financial disclosures.

Dr. Macones is the Mitchell and Elaine Yanow Professor and Chair, and director of the division of maternal-fetal medicine and ultrasound in the department of obstetrics and gynecology at Washington University, St. Louis.

A step away from immediate umbilical cord clamping

CKT more important than del(17p) in CLL, group finds


SAN FRANCISCO—New research suggests complex metaphase karyotype (CKT) is a stronger predictor of inferior outcome than 17p deletion in patients with relapsed or refractory chronic lymphocytic leukemia (CLL) who are treated with the BTK inhibitor ibrutinib.

The study showed that CKT, defined as 3 or more distinct chromosomal abnormalities, was independently associated with inferior event-free survival (EFS) and overall survival (OS), but del(17p) was not.

According to investigators, this suggests that del(17p) patients without CKT could be managed with long-term ibrutinib and close monitoring, as these patients have outcomes similar to those of patients without del(17p).

However, patients with CKT will likely require treatment-intensification strategies after ibrutinib-based therapy.

“We believe that patients with a complex karyotype represent an ideal group in whom to study novel treatment approaches, including ibrutinib-based combination regimens and/or consolidated approaches after initial ibrutinib response,” said investigator Philip A. Thompson, MBBS, of the University of Texas MD Anderson Cancer Center in Houston.

Dr Thompson presented his group’s findings at the 2014 ASH Annual Meeting as abstract 22.* Investigators involved in this study received research funding or consultancy fees from Pharmacyclics, Inc., makers of ibrutinib.

Patient characteristics

Dr Thompson and his colleagues analyzed 100 patients with relapsed/refractory CLL who received treatment with ibrutinib-based regimens—50 with ibrutinib alone, 36 with ibrutinib and rituximab, and 14 with ibrutinib, rituximab, and bendamustine.

The median age was 65 (range, 35-83), patients received a median of 2 prior therapies (range, 1-12), and 19% were fludarabine-refractory. Sixty percent of patients had Rai stage III-IV disease, 52% had bulky adenopathy, 81% had unmutated IGHV, and 56% had β2-microglobulin ≥ 4.0 mg/L.

FISH was available for 94 patients, and metaphase analysis was available for 65 patients. Forty-two percent (27/65) of patients had CKT, 28% (26/94) had del(11q), and 48% (45/94) had del(17p).

Of the 45 patients who had del(17p), 23 also had CKT. And of the 49 patients who did not have del(17p), 4 had CKT.

Event-free survival

The median follow-up in surviving patients was 27 months (range, 11-48). Eight patients had planned allogeneic stem cell transplant and were censored for the EFS analysis.

“As has been shown previously, patients with 17p deletion by FISH did have inferior event-free survival,” Dr Thompson said. “And when we looked at those patients with complex metaphase karyotype, there was a highly significant inferior event-free survival in these patients, compared to those without complex karyotype.”

EFS was 78% in patients with neither del(17p) nor del(11q), 69% in patients with del(11q), and 60% in patients with del(17p) (P=0.014).

EFS was 82% in patients without CKT and 44% in those with CKT (P<0.0001). In patients with del(17p), EFS was 78% in those without CKT and 48% in those with CKT (P=0.047).

In patients without CKT, EFS was 79% in those without del(17p) or del(11q), 90% in those with del(11q), and 78% in those with del(17p) (P=0.516).

“Interestingly, when we looked at the events that occurred in those patients without complex karyotype, none were due to CLL progression or Richter’s transformation,” Dr Thompson said.

In multivariable analysis, CKT was significantly associated with EFS (P=0.011), but del(17p) was not (P=0.887).

Overall survival

There was no significant difference in OS according to the presence of del(17p) or del(11q). OS was 87% in patients with neither del(17p) nor del(11q), 81% in patients with del(11q), and 67% in patients with del(17p) (P=0.054).

However, there was a significant difference in OS for patients with and without CKT. OS was 82% in patients without CKT and 56% in patients with CKT (P=0.006).

Among patients without CKT, OS was 84% in those with neither del(17p) nor del(11q), 80% in those with del(11q), and 78% in those with del(17p) (P=0.52).

In multivariable analysis, OS was significantly associated with CKT (P=0.011) and fludarabine-refractory disease (P=0.004) but not del(17p) (P=0.981).

“So, in summary, complex karyotype appears to be a more important predictor of outcomes in patients with relapsed or refractory CLL treated with ibrutinib-based regimens than the presence of del(17p) by FISH,” Dr Thompson said.

“Patients without complex karyotype have a low rate of disease progression, including those who have del(17p). Most progressions during ibrutinib therapy occur late, beyond the 12-month time point, but survival is short after disease progression.”

*Information in the abstract differs from that presented at the meeting.


Long-Acting Insulin Analogs: Effects on Diabetic Retinopathy


Long-acting insulin analogs are designed to enhance glycemic control without excessively lowering blood glucose. But structural modifications of the insulin molecule can alter biological responses and binding characteristics with specific receptors; in short, they can potentially raise the risk of sight-threatening diabetic retinopathy (STDR), say researchers from Taipei City Hospital and National Taiwan University, both in Taiwan.

The researchers note that some clinical trials have reported that intensification of insulin therapy might accelerate progression of pre-existing STDR. However, they add that some of the studies suggesting harm were conducted in cancer cell lines, with insulin administered at supraphysiologic concentrations.

The researchers conducted a retrospective study to compare the effects of long-acting insulin analogs (glargine and/or detemir) with those of neutral protamine Hagedorn (NPH) insulin on the progression of STDR in 46,739 patients with type 2 diabetes mellitus (T2DM).

They found no difference in the risk of STDR with the long-acting insulin analogs in either matched or unmatched cohorts. For instance, over a median follow-up of 483 days, there were 479 events among 8,947 glargine initiators, compared with 541 events over a median follow-up of 541 days among 8,947 NPH initiators. The detemir group, with 411 days of follow-up, had 64 events.

Despite a “relatively short” observation period, the researchers say their findings agree with those of a previous open-label randomized study of patients with T2DM, which found treatment with insulin glargine over 5 years did not increase progression of STDR, compared with NPH insulin treatment.

Source
Lin JC, Shau WY, Lai MS. Clin Ther. 2014;36(9):1255-1268. doi: 10.1016/j.clinthera.2014.06.031.

Issue
Federal Practitioner - 31(12)
Page Number
44


CHMP supports expanding use of lenalidomide in MM

Article Type
Changed
Display Headline
CHMP supports expanding use of lenalidomide in MM

Micrograph showing MM

The European Medicines Agency’s Committee for Medicinal Products for Human Use (CHMP) is recommending approval of continuous oral treatment with lenalidomide (Revlimid) in adults with previously untreated multiple myeloma (MM) who are ineligible for hematopoietic stem cell transplant (HSCT).

The European Commission, which generally follows the CHMP’s recommendations, is expected to make its final decision in about 2 months.

Lenalidomide is not currently approved to treat newly diagnosed MM in any country.

The drug is approved in the European Union (EU) for use in combination with dexamethasone to treat adults with MM who have received at least one prior therapy.

Lenalidomide is also approved in the EU to treat patients with transfusion-dependent anemia due to low- or intermediate-1-risk myelodysplastic syndromes associated with 5q deletion when other therapeutic options are insufficient or inadequate.

The CHMP’s recommendation to extend the use of lenalidomide to HSCT-ineligible patients with newly diagnosed MM was based on the results of 2 studies: MM-015 and MM-020, also known as FIRST.

The FIRST trial

In the phase 3 FIRST trial, researchers enrolled 1623 patients who were newly diagnosed with MM and not eligible for HSCT.

Patients were randomized to receive lenalidomide and dexamethasone (Rd) in 28-day cycles until disease progression (n=535), 18 cycles of lenalidomide and dexamethasone (Rd18) for 72 weeks (n=541), or melphalan, prednisone, and thalidomide (MPT) for 72 weeks (n=547).

Response rates were significantly better with continuous Rd (75%) and Rd18 (73%) than with MPT (62%, P<0.001 for both comparisons). Complete response rates were 15%, 14%, and 9%, respectively.

The median progression-free survival was 25.5 months with continuous Rd, 20.7 months with Rd18, and 21.2 months with MPT.

This resulted in a 28% reduction in the risk of progression or death for patients treated with continuous Rd compared with those treated with MPT (hazard ratio [HR]=0.72, P<0.001) and a 30% reduction compared with Rd18 (HR=0.70, P<0.001).

The pre-planned interim analysis of overall survival showed a 22% reduction in the risk of death for continuous Rd vs MPT (HR=0.78, P=0.02), but the difference did not cross the pre-specified superiority boundary (P<0.0096).
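The percentage risk reductions quoted above follow directly from the hazard ratios (reduction = 1 − HR). For readers who want to verify the arithmetic, a minimal sketch (our check, not part of the trial):

```python
def risk_reduction_pct(hazard_ratio):
    """Percentage reduction in risk implied by a hazard ratio (1 - HR)."""
    return round((1 - hazard_ratio) * 100)

# The trial's reported comparisons:
print(risk_reduction_pct(0.72))  # continuous Rd vs MPT: 28
print(risk_reduction_pct(0.70))  # continuous Rd vs Rd18: 30
print(risk_reduction_pct(0.78))  # overall survival, continuous Rd vs MPT: 22
```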

Adverse events reported in 20% or more of patients in the continuous Rd, Rd18, or MPT arms included diarrhea (45.5%, 38.5%, 16.5%), anemia (43.8%, 35.7%, 42.3%), neutropenia (35.0%, 33.0%, 60.6%), fatigue (32.5%, 32.8%, 28.5%), back pain (32.0%, 26.9%, 21.4%), insomnia (27.6%, 23.5%, 9.8%), asthenia (28.2%, 22.8%, 22.9%), rash (26.1%, 28.0%, 19.4%), decreased appetite (23.1%, 21.3%, 13.3%), cough (22.7%, 17.4%, 12.6%), pyrexia (21.4%, 18.9%, 14.0%), muscle spasms (20.5%, 18.9%, 11.3%) and abdominal pain (20.5%, 14.4%, 11.1%).

The incidence of invasive second primary malignancies was 3% in patients taking continuous Rd, 6% in patients taking Rd18, and 5% in those taking MPT. The overall incidence of solid tumors was 3% in both the continuous Rd and MPT arms and 5% in the Rd18 arm.

The MM-015 trial

In the phase 3 MM-015 study, researchers enrolled 459 patients who were 65 or older and newly diagnosed with MM. The team compared melphalan-prednisone-lenalidomide induction followed by lenalidomide maintenance (MPR-R) with melphalan-prednisone-lenalidomide (MPR) or melphalan-prednisone (MP) followed by placebo.

Patients who received MPR-R or MPR had significantly better response rates than patients who received MP, at 77%, 68%, and 50%, respectively (P<0.001 and P=0.002, respectively, for the comparison with MP).

And the median progression-free survival was significantly longer with MPR-R (31 months) than with MPR (14 months, HR=0.49, P<0.001) or MP (13 months, HR=0.40, P<0.001).

During induction, the most frequent adverse events were hematologic. Grade 4 neutropenia occurred in 35% of patients in the MPR-R arm, 32% in the MPR arm, and 8% in the MP arm. The 3-year rate of second primary malignancies was 7%, 7%, and 3%, respectively.


Bisphosphonates may protect against endometrial cancer

Article Type
Changed
Display Headline
Bisphosphonates may protect against endometrial cancer

The use of nitrogenous bisphosphonates was associated with a nearly 50% reduction in the incidence of endometrial cancer among women in the PLCO, or Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial.

The endometrial cancer incidence rate among women in the study who reported ever using nitrogenous bisphosphonates was 8.7/10,000 person-years, compared with 17.7/10,000 person-years among those who reported never being exposed to nitrogenous bisphosphonates (rate ratio, 0.49), Sharon Hensley Alford, Ph.D., of the Henry Ford Health System, Detroit, and her colleagues reported online Dec. 22 in Cancer.
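The unadjusted rate ratio reported here is simply the ratio of the two incidence rates; a quick arithmetic check (ours, not the study's analysis):

```python
# Endometrial cancer incidence, events per 10,000 person-years (as reported)
ever_users_rate = 8.7    # women who ever used nitrogenous bisphosphonates
never_users_rate = 17.7  # women who never used them

rate_ratio = ever_users_rate / never_users_rate
print(round(rate_ratio, 2))  # 0.49, matching the reported rate ratio
```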

©Thomas Northcut/ Thinkstockphotos.com

The effect was similar after adjustment for age, race, body mass index, smoking status, and use of hormone therapy (hazard ratio, 0.56). The effect was also similar for both type I and type II disease, although there were only nine cases of type II disease, so the finding did not reach statistical significance, the investigators reported (Cancer 2014 Dec. 22 [doi:10.1002/cncr.28952]).

PLCO study subjects included in the current analysis were 23,485 women aged 55-74 years at study entry between 1993 and 2001 who had no cancer diagnosed prior to year 5 of the study, when they completed a supplemental questionnaire assessing bone medication use. The women were followed until last known contact, death, or endometrial cancer diagnosis.

The findings support those of preclinical studies demonstrating antitumor effects of bisphosphonates, and suggest that their use may protect against endometrial cancer, the investigators said.

“However, additional studies are needed that include other potential confounders and a larger sample so that type II endometrial cancer could be assessed more confidently,” they concluded, adding that a trial assessing for endometrial, breast, and colorectal cancer in postmenopausal women would be ideal.

The PLCO trial was funded by the National Institutes of Health. The authors reported having no relevant financial disclosures.

Vitals

Key clinical point: Women with a history of bisphosphonate use had a reduced risk of developing endometrial cancer.

Major finding: The endometrial cancer incidence rate was 8.7 vs. 17.7/10,000 person-years for ever vs. never users of nitrogenous bisphosphonates (rate ratio, 0.49).

Data source: An analysis of data from 23,485 women from a randomized population-based trial.

Disclosures: The PLCO trial was funded by the National Institutes of Health. The authors reported having no financial disclosures.

A Decision Aid Did Not Improve Patient Empowerment for Setting and Achieving Diabetes Treatment Goals

Article Type
Changed
Display Headline
A Decision Aid Did Not Improve Patient Empowerment for Setting and Achieving Diabetes Treatment Goals

Study Overview

Objective. To determine if a patient-oriented decision aid for prioritizing treatment goals in diabetes leads to changes in patient empowerment for setting and achieving goals and in treatment.

Design. Randomized controlled trial.

Setting and participants. Study participants were recruited from 18 general practices in the north of the Netherlands between April 2011 and August 2012. Participants were included if they had a diagnosis of type 2 diabetes and were managed in primary care. Participants were identified from the electronic medical record system, and at least 40 patients were selected from each practice to be contacted for participation. Subjects were excluded if they had a myocardial infarction in the preceding year; had experienced a stroke; had heart failure, angina, or a terminal illness; or were more than 65 years of age when they received their diabetes diagnosis. Other exclusion criteria included dementia, cognitive deficits, blindness, and an inability to read Dutch. Eligibility criteria were confirmed with the health care provider from each practice. Practices included in the study had several features: (1) each had an electronic medical record system supporting structured care protocols; (2) most had a nurse practitioner or specialized assistant for diabetes care who carried out the quarterly diabetes checks and was trained to conduct physical examinations, risk assessments, patient education, and counseling; and (3) all practices had received training in motivational interviewing.

The decision aid format was either a computer screen or printed version, and presented as either a short version, showing treatment effects on myocardial infarction risk only, or as an extended version, including effects on additional outcomes (stroke, amputation, blindness, renal failure). Practices were randomly assigned to use the computer screen or printed version, stratified by practice size (< 2500 patients or > 2500 patients) and number of GPs (solo or several). Within each practice, consenting patients were randomized to receive the short version aid, the extended version, or to the control group.

Intervention. The decision aid presents individually tailored information on risks and treatment options for multiple risk factors. The aid focuses on shared goal setting and decision making, particularly with respect to the drug treatment of risk factors including hemoglobin A1c, systolic blood pressure, low density lipoprotein cholesterol, and smoking. The decision aid is designed to be used by patients before a regular check-up and discussed with their health care provider during a visit to help prioritize treatment that will maximize outcomes; the aid helps to summarize effects of the various treatment options. The patients were asked to come to the practice 15 minutes in advance to go through the information, either in print or on the computer; health care providers were expected to support patients to think about treatment goals and options. Patients in the control received care as usual.

Main outcome measures. The primary outcome measure was the empowerment of patients for setting and achieving goals, which was measured with the Diabetes Empowerment Scale (DES-III). Other outcome measures included changes in treatment, including intensification of drug treatment and treatment with ACE inhibitors.

Main results. A total of 344 patients were included in the study and randomized to the intervention (n = 225) or usual care group (n = 119). Patients in the intervention group were comparable to usual care patients in terms of age, sex, and educational level. However, there were several differences between the 2 groups: intervention patients were more likely to have a well-controlled HbA1c level at baseline and less likely to have well-controlled blood pressure at baseline. Among participants in the intervention group, only 46% reported having received the basic elements of the intervention. The mean empowerment score increased 0.1 point on a 5-point scale in the intervention group, which was not different from the control group (mean adjusted difference, 0.039 points [95% confidence interval {CI}, −0.056 to 0.134]). Lipid-lowering medication treatment was intensified in 25% of intervention and 12% of control participants (odds ratio [OR], 2.5 [95% CI, 0.89–7.23]). Explorative analyses comparing the printed version of the aid with control did find greater intensification of lipid-lowering treatment, although the confidence interval was wide (OR, 3.90 [95% CI, 1.29–11.80]). No other differences in treatment plan were observed.

Conclusions. The treatment decision aid for diabetes did not improve patient empowerment or substantially alter treatment plan when compared to usual care. However, this finding is limited by the uptake of use of the decision aid during the study period.

Commentary

Patient engagement through shared decision making is an important element in chronic disease management, particularly in diseases such as diabetes, where a number of significant tasks, including monitoring and administration of medication, are key to successful management. The use of decision aids is an innovation that has demonstrated effects in improving patient understanding of disease and has potential downstream effects in improving management and control of the disease [1]. However, the use of decision aids is not without limitations: patients with poorer health literacy, and perhaps lower socioeconomic status, may derive less clinical benefit [2], and in older adults cognitive and physical limitations may also restrict their use.

This study found that the decision aid used in the study did not significantly improve patient empowerment or alter treatment plan. In comparison with previous studies on decision aids for diabetes [3,4], this study is notable that it did not find any significant clinical impact of the decision aid when compared with usual care. However, it is important to consider reasons that may explain its null finding. First, the study has a rather complicated design, with 4 different intervention groups. The study design attempts to differentiate intervention groups with differences in its delivery (computer screen vs. printed) and content (focused information on myocardial infarction risk outcome only vs. all outcomes). The rationale was that it could provide evidence to perhaps suggest the most effective decision aid, but the drawback is that it has the potential to weaken the power of the study, increasing the likelihood of a false-negative finding. Second, in contrast to other studies, this study also uses a different measurement as its primary outcome—a measurement of patient empowerment. Though an important concept to measure, it is less clear what the expected impact and what the level of clinical significance would be. Third, as noted by the investigators, the decision aid had limited uptake in the intervention group; this may be related to its design and format. The challenge in design of a decision aid is that it needs to be simple and easy to use, consume little time, yet be adequately informative with helpful information for patients. Finally, another unique feature of the study is that the control group was an active control group, in that the providers in the practices had significant training in motivational interviewing and communication, which may have made it more challenging to demonstrate impact in intervention group.

Applications for Clinical Practice

Decision aids remain a potentially important addition for patients in the management of chronic diseases such as diabetes. Most studies have demonstrated significant impact. Despite the limitations of the current study, it does point out that different formats of decision aid may have different effects on patient outcomes. For practices that are adopting decision aids for chronic disease management, they need to take into account the format, the information, and the burden of use of the decision aid. Further studies may help to elucidate how decision aids can be optimized for maximizing clinical impact.

—William Hung, MD, MPH

 

References

1. Stacey D, Légaré F, Col NF, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev 2014;1:CD001431.

2. Coylewright M, Branda M, Inselman JW, et al. Impact of sociodemographic patient characteristics on the efficacy of decision AIDS: a patient-level meta-analysis of 7 randomized trials. Circ Cardiovasc Qual Outcomes 2014;7:360–7.

3. Mathers N, Ng CJ, Campbell MJ, et al. Clinical effectiveness of a patient decision aid to improve decision quality and glycaemic control in people with diabetes making treatment choices: a cluster randomized controlled trial (PANDAs) in general practice. BMJ Open 2012;2:e001469.

4. Branda ME, LeBlanc A, Shah ND, et al. Shared decision making for patients with type 2 diabetes: a randomized trial in primary care. BMC Health Serv Res 2013;13:301.

Journal of Clinical Outcomes Management - December 2014, Vol. 21, No. 12

Study Overview

Objective. To determine if a patient-oriented decision aid for prioritizing treatment goals in diabetes leads to changes in patient empowerment for setting and achieving goals and in treatment.

Design. Randomized controlled trial.

Setting and participants. Study participants were recruited from 18 general practices in the north of the Netherlands between April 2011 and August 2012. Participants were included if they had a diagnosis of type 2 diabetes and were managed in primary care. Participants were identified from the electronic medical record system, and at least 40 patients were selected from each practice to be contacted for participation. Subjects were excluded if they had a myocardial infarction in the preceding year; had experienced a stroke; had heart failure, angina, or a terminal illness; or were more than 65 years of age when they received their diabetes diagnosis. Other exclusion criteria included dementia, cognitive deficits, blindness, and an inability to read Dutch. Eligibility criteria were confirmed with the health care provider from each practice. Practices included in the study shared several features: (1) each had an electronic medical record system supporting structured care protocols; (2) most had a nurse practitioner or specialized assistant for diabetes care who carried out the quarterly diabetes checks and was trained to conduct physical examinations, risk assessments, patient education, and counseling; and (3) all had received training in motivational interviewing.

The decision aid was delivered either on a computer screen or in a printed version, and was presented either as a short version, showing treatment effects on myocardial infarction risk only, or as an extended version, which included effects on additional outcomes (stroke, amputation, blindness, renal failure). Practices were randomly assigned to use the computer screen or printed version, stratified by practice size (< 2500 or > 2500 patients) and number of GPs (solo or several). Within each practice, consenting patients were randomized to the short version, the extended version, or the control group.
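The two-level randomization described above, with format assigned at the practice level (stratified by size and number of GPs) and aid version assigned at the patient level, can be sketched as follows. The practice roster and patient counts here are invented for illustration and are not the trial's data.

```python
import random

# Hypothetical practice roster: two practices per (size, gps) stratum.
random.seed(7)
practices = [
    {"name": f"practice-{i}", "size": size, "gps": gps}
    for i, (size, gps) in enumerate(
        [("small", "solo"), ("small", "several"),
         ("large", "solo"), ("large", "several")] * 2
    )
]

# Level 1: randomize practices to screen vs print delivery, stratified by
# practice size and number of GPs so each stratum is balanced.
strata = {}
for p in practices:
    strata.setdefault((p["size"], p["gps"]), []).append(p)
for members in strata.values():
    random.shuffle(members)
    for i, p in enumerate(members):
        p["format"] = "screen" if i % 2 == 0 else "print"

# Level 2: within each practice, randomize consenting patients to the
# short aid, the extended aid, or the control group.
patient_arms = {
    p["name"]: [random.choice(["short", "extended", "control"])
                for _ in range(5)]
    for p in practices
}
```

Stratifying the practice-level step keeps screen and print balanced within each practice type, so format effects are not confounded with practice size or staffing.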

Intervention. The decision aid presents individually tailored information on risks and treatment options for multiple risk factors. The aid focuses on shared goal setting and decision making, particularly with respect to drug treatment of risk factors including hemoglobin A1c, systolic blood pressure, low-density lipoprotein cholesterol, and smoking. The decision aid is designed to be reviewed by patients before a regular check-up and discussed with their health care provider during the visit to help prioritize the treatments that will maximize outcomes; the aid summarizes the effects of the various treatment options. Patients were asked to come to the practice 15 minutes in advance to go through the information, either in print or on the computer; health care providers were expected to support patients in thinking about treatment goals and options. Patients in the control group received usual care.

Main outcome measures. The primary outcome measure was patient empowerment for setting and achieving goals, measured with the Diabetes Empowerment Scale (DES-III). Other outcome measures included changes in treatment, such as intensification of drug treatment and treatment with ACE inhibitors.

Main results. A total of 344 patients were included in the study and were randomized to the intervention (n = 225) or usual care (n = 119) group. Patients in the intervention group were comparable to usual care patients in terms of age, sex, and educational level. However, there were several differences between the 2 groups: intervention patients were more likely to have a well-controlled HbA1c level at baseline and less likely to have well-controlled blood pressure at baseline. Among participants in the intervention group, only 46% reported having received the basic elements of the intervention. The mean empowerment score increased 0.1 point on a 5-point scale in the intervention group, which was not different from the control group (adjusted mean difference, 0.039 points [95% confidence interval {CI}, −0.056 to 0.134]). Lipid-lowering medication treatment was intensified in 25% of intervention and 12% of control participants (odds ratio [OR], 2.5 [95% CI, 0.89–7.23]). Exploratory analyses comparing the printed version of the aid with control did find greater intensification of lipid-lowering treatment, although the confidence interval was wide (OR, 3.90 [95% CI, 1.29–11.80]). No other differences in treatment plan were observed.
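As a quick back-of-the-envelope check, the unadjusted odds ratio for intensification of lipid-lowering treatment can be recomputed from the group proportions alone (25% vs 12%). This is only a sketch: the published OR of 2.5 may reflect adjustment or exact counts rather than the rounded percentages used here.

```python
# Odds ratio from the reported proportions of treatment intensification:
# 25% of intervention vs 12% of usual-care participants.
def odds(p):
    """Convert a proportion to odds, p / (1 - p)."""
    return p / (1.0 - p)

or_estimate = odds(0.25) / odds(0.12)
print(round(or_estimate, 2))  # ~2.44, close to the reported OR of 2.5
```

The wide 95% CI (0.89 to 7.23) crossing 1.0 is why this difference, despite an OR near 2.5, was not statistically significant.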

Conclusions. The treatment decision aid for diabetes did not improve patient empowerment or substantially alter treatment plans when compared with usual care. However, this finding is limited by the low uptake of the decision aid during the study period.

Commentary

Patient engagement through shared decision making is an important element of chronic disease management, particularly in diseases such as diabetes, in which a number of significant tasks, including monitoring and administration of medication, are key to successful management. The use of decision aids is an innovation with demonstrated effects on improving patient understanding of disease and a potential downstream effect of improving management and control of the disease [1]. However, decision aids are not without limitations: patients with poorer health literacy, and perhaps lower socioeconomic status, may derive less clinical benefit [2], and in older adults cognitive and physical limitations may also limit their use.

This study found that the decision aid did not significantly improve patient empowerment or alter treatment plans. In comparison with previous studies of decision aids for diabetes [3,4], this study is notable in that it did not find any significant clinical impact of the decision aid when compared with usual care. However, it is important to consider reasons that may explain this null finding. First, the study has a rather complicated design, with 4 different intervention groups, differentiated by delivery (computer screen vs printed) and content (information on myocardial infarction risk only vs all outcomes). The rationale was that this could provide evidence pointing to the most effective form of decision aid, but the drawback is that splitting the sample may weaken the power of the study, increasing the likelihood of a false-negative finding. Second, in contrast to other studies, this study used a different primary outcome, a measure of patient empowerment. Though an important concept to measure, it is less clear what impact on empowerment should be expected and what magnitude of change would be clinically significant. Third, as noted by the investigators, the decision aid had limited uptake in the intervention group; this may be related to its design and format. The challenge in designing a decision aid is that it must be simple, easy to use, and quick to complete, yet adequately informative for patients. Finally, another unique feature of the study is its active control group: providers in all practices had significant training in motivational interviewing and communication, which may have made it more challenging to demonstrate an impact in the intervention group.

Applications for Clinical Practice

Decision aids remain a potentially important addition to the management of chronic diseases such as diabetes, and most prior studies have demonstrated significant impact. Despite its limitations, the current study does point out that different formats of decision aids may have different effects on patient outcomes. Practices adopting decision aids for chronic disease management need to take into account the format, the information presented, and the burden of use. Further studies may help to elucidate how decision aids can be optimized to maximize clinical impact.

—William Hung, MD, MPH

 

References

1. Stacey D, Légaré F, Col NF, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev 2014;1:CD001431.

2. Coylewright M, Branda M, Inselman JW, et al. Impact of sociodemographic patient characteristics on the efficacy of decision aids: a patient-level meta-analysis of 7 randomized trials. Circ Cardiovasc Qual Outcomes 2014;7:360–7.

3. Mathers N, Ng CJ, Campbell MJ, et al. Clinical effectiveness of a patient decision aid to improve decision quality and glycaemic control in people with diabetes making treatment choices: a cluster randomized controlled trial (PANDAs) in general practice. BMJ Open 2012;2:e001469.

4. Branda ME, LeBlanc A, Shah ND, et al. Shared decision making for patients with type 2 diabetes: a randomized trial in primary care. BMC Health Serv Res 2013;13:301.

A Decision Aid Did Not Improve Patient Empowerment for Setting and Achieving Diabetes Treatment Goals

Letting Our Patients “Fail Fast”: Early Non-Response to Lorcaserin May Be a Good Reason to Discontinue Medication

Study Overview

Objective. To examine whether an early response (or non-response) to lorcaserin therapy predicts ≥ 5% weight loss achieved at 1 year.

Study design. Secondary analysis of data collected in 3 placebo-controlled blinded randomized trials.

Setting and participants. This study relied upon data collected as part of 3 separate phase 3 clinical trials of lorcaserin, a weight loss drug and selective serotonin 2c (5-HT2c) receptor agonist. The first study, “Behavioral Modification and Lorcaserin for Overweight and Obesity Management” (BLOOM; n = 3182), enrolled overweight (with at least 1 comorbidity) or obese (no comorbidity needed) adult patients (aged 18–65 years) without diabetes to determine the safety and efficacy of lorcaserin. The second trial, “Behavioral Modification and Lorcaserin Second Study of Obesity Management” (BLOSSOM; n = 4008), enrolled a population similar to that of BLOOM. For both BLOOM and BLOSSOM, patients were randomly assigned to receive either lorcaserin (10 mg po bid) or placebo for a 1-year period, and all patients received advice and instruction in the exercise goals (at least 30 min/day) and caloric intake (600 kcal less than recommended for weight maintenance for that individual) necessary to promote weight loss. The third trial, BLOOM-DM (n = 604), focused on overweight or obese diabetic patients but otherwise was similar in methodology to BLOOM and BLOSSOM. All studies took place in multiple US academic and private medical centers and were funded by Arena Pharmaceuticals. For the current analysis, the investigators used data from these trials and classified participants as either “responders” or “non-responders” based on each participant’s early weight loss response to either lorcaserin or placebo.

Main outcome measures. The investigators used area under the curve for the receiver operating characteristic (AUC for ROC) analysis to determine whether an early weight loss response to lorcaserin or placebo predicted a patient’s longer-term (52-week) weight loss. Several steps were used to conduct these analyses.

First, the investigators needed to determine what amount of weight loss, at which of several early time points, would qualify a participant as a “responder” to either drug or placebo. They compared weight lost at weeks 2, 4, 8, and 12, using AUC for ROC analysis to identify the appropriate “responder” or “non-responder” cut-points, and classified all participants with data points in these early weeks accordingly. Second, all of the early responder and non-responder participants with 52-week weight data were then classified as to whether or not they had achieved at least a 5% weight loss at the end of the study. AUC for ROC analysis was again used to determine whether this early categorization was predictive of final study response. In addition to looking at early response as predictive of final weight loss, the investigators examined the response/non-response variable’s ability to predict other health outcomes, including changes in lipid levels, blood pressure, and, for type 2 diabetic participants, changes in glycemic control (fasting plasma glucose [FPG] and HbA1c). Finally, the investigators examined the incidence of adverse events in the different groups as well.
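The AUC-for-ROC idea underlying this analysis has a simple interpretation: the AUC equals the probability that a randomly chosen eventual "success" (≥ 5% loss at week 52) showed more early weight loss than a randomly chosen "failure" (the Mann-Whitney formulation). A minimal sketch, with hypothetical week-12 weight-loss values rather than the trial's data:

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of the ROC AUC: the fraction of
    (positive, negative) pairs where the positive scores higher
    (ties count as half a win)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical early (week-12) percent weight loss, split by whether the
# participant ultimately achieved >= 5% weight loss at week 52.
achieved_5pct = [6.0, 5.1, 4.8, 7.2, 5.5]
did_not = [1.0, 2.3, 4.9, 0.5, 3.0]

print(auc(achieved_5pct, did_not))  # 0.96
```

An AUC of 0.5 would mean early weight loss carries no information about the 52-week outcome; values near 1.0, like the 0.849 reported here, indicate strong discrimination.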

Results. The investigators identified a 4.6% weight loss by week 12 on lorcaserin or placebo as the optimal cut-point for determining whether a participant was a “responder” or “non-responder” (“W12R” or “W12NR”). This cut-point had an AUC (95% CI) of 0.849 (0.828–0.870) for predicting ≥ 5% weight loss at 52 weeks, with a positive predictive value (PPV) of 0.855 and negative predictive value (NPV) of 0.740, thus optimizing specificity and sensitivity compared with the cut-points at weeks 2, 4, or 8. Given the need for practical clinical recommendations, however, the investigators used a cut-point of 5% weight loss by week 12 to determine response/non-response for the health outcome analyses. The breakdown of responders versus non-responders was as follows: for the pooled BLOOM/BLOSSOM participants, there were 1251 lorcaserin-recipient responders and 1286 lorcaserin-recipient non-responders (about 40% of those randomized to lorcaserin were “responders”). Among placebo recipients, there were 541 early responders and 1852 non-responders (about 17% of those randomized to placebo were “responders”). For the diabetic BLOOM-DM participants, the ratios were similar although slightly less favorable, with only about 30% (n = 78) of lorcaserin patients classified as W12 responders (139 non-responders), and 10% (n = 25) of placebo patients as W12 responders (192 non-responders).
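The reported PPV and NPV follow directly from a 2x2 classification table crossing early response against the 52-week outcome. The cell counts below are hypothetical, chosen only to be consistent with the pooled responder totals (1251 and 1286) and the reported PPV of 0.855 and NPV of 0.740; the paper does not give the table itself.

```python
# 2x2 table: rows = week-12 response, columns = >=5% loss at week 52.
tp = 1070  # W12 responders who achieved >= 5% loss at week 52 (hypothetical)
fp = 181   # W12 responders who did not (hypothetical)
fn = 334   # W12 non-responders who nonetheless achieved >= 5% (hypothetical)
tn = 952   # W12 non-responders who did not (hypothetical)

ppv = tp / (tp + fp)  # P(52-week success | early responder)
npv = tn / (tn + fn)  # P(no 52-week success | early non-responder)
print(round(ppv, 3), round(npv, 3))  # 0.855 0.74
```

Read clinically, an NPV of 0.740 means roughly 3 in 4 early non-responders will not reach 5% weight loss at 1 year, which is the basis for the "fail fast" discontinuation argument.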

The lorcaserin and placebo groups in BLOOM and BLOSSOM were similar to one another, with an overall mean (SD) age of 43.8 (11.6) years for lorcaserin and 44.0 (11.4) years for placebo. The vast majority of participants in these 2 trials were female (81.7% in the lorcaserin arms, 81.0% in placebo), and the majority were non-Hispanic white (67.6% lorcaserin, 66.2% placebo). The mean (SD) baseline body mass index (BMI, kg/m²) was 36.1 (4.3) for lorcaserin and 36.1 (4.2) for placebo. The BLOOM-DM participants were also similar in the lorcaserin and placebo arms, although they were older (mean age, 53.2 years for lorcaserin, 52 years for placebo) and less likely to be female (53.5% lorcaserin, 54.4% placebo). Otherwise, the BLOOM-DM participants were similar on reported demographic characteristics to those in the other 2 trials.

Importantly, however, in all 3 trials there were differences in demographic characteristics between participants characterized as responders and those characterized as non-responders. Among the nondiabetic participants in the BLOOM and BLOSSOM studies, responders (to both lorcaserin and placebo) were more likely to be non-Hispanic white (African American and Hispanic participants were more likely to be non-responders), and responders were older than non-responders. Interestingly, for the diabetics in the BLOOM-DM trial, the responder/non-responder differences were less pronounced, although responders were still slightly more likely to be non-Hispanic white and older, particularly for placebo.

Among BLOOM and BLOSSOM participants who received lorcaserin, mean weight loss at 52 weeks was 10.8% among W12Rs and only 2.7% among W12NRs. A similar pattern was observed in the BLOOM and BLOSSOM placebo participants; W12Rs averaged 9.5% weight loss at 52 weeks, versus just 1.1% in W12NRs. Among diabetics receiving lorcaserin in the BLOOM-DM study, weight loss at 1 year was 9.1% in W12Rs versus 3.1% in W12NRs. Similarly, among placebo recipients in BLOOM-DM, weight loss at 1 year was 7% for W12Rs and 1.3% for W12NRs. When weight loss at 1 year was categorized in terms of whether participants achieved at least 5% or 10% weight loss, once again early responders to either lorcaserin or placebo had higher rates of achieving both thresholds. Namely, 85.5% of nondiabetic W12Rs had achieved or maintained 5% weight loss at week 52, while only 26% of the W12NRs ultimately did so. Seventy percent of diabetic W12Rs receiving lorcaserin had ≥ 5% weight loss at week 52, versus 25.2% of W12NRs. The pattern of prediction for achieving 10% weight loss at week 52 was even more pronounced, with, for example, 49.8% of nondiabetic W12Rs having lost at least 10% of their starting weight at 1 year, versus just 4.7% of W12NRs.

When cardiometabolic outcomes were examined, the differences between W12 lorcaserin responders and non-responders appeared to be somewhat attenuated. For example, among diabetic patients, W12 lorcaserin responders had a mean decrease of 1.2% in their HbA1c level by study end, compared with a nearly 1% decrease in W12NRs. For fasting plasma glucose, the improvement at week 52 was pronounced (about 30 mg/dL lower than baseline) and very similar in W12 responders and non-responders.

Among nondiabetics, average blood pressure lowering (SBP and DBP) at week 52 was greater among lorcaserin W12 responders (SBP dropped 4 mm Hg on average, DBP 3 mm Hg) than it was among non-responders (SBP and DBP dropped by about 1 mm Hg). Other than triglycerides, which decreased substantially among W12 responders (whether on placebo or lorcaserin), changes to lipid profile were relatively small for nondiabetics. Among diabetics, however, LDL and HDL both increased on average in all 4 groups (W12 responders/non-responders to placebo/lorcaserin) by week 52.

Common adverse events for lorcaserin-treated patients included headache (15%–17%), upper respiratory infections (9%–14%), nausea (8%–9%), and dizziness (8% among nondiabetics). Among diabetics, hypoglycemia occurred in 29.3% of those treated with lorcaserin (vs. 21% on placebo). Week 12 responders and non-responders appeared to have a similar adverse event profile, and, in general, adverse events were more common among lorcaserin than placebo participants.

Conclusion. The authors of this study concluded that a week-12 weight loss of ≥ 5% on lorcaserin was a strong predictor of achieving at least that same amount of weight loss, as well as improvements in some cardiometabolic parameters, at 1 year.

Commentary

In 2013, the American Medical Association officially recognized obesity as a disease. This shift in terminology, coupled with a movement toward reimbursing primary care providers for obesity-related interventions, has created a growing awareness among providers that better treatment options for this chronic condition are sorely needed. Just as we treat patients with hypertension and type 2 diabetes by titrating medications, discontinuing those that aren’t effective and continuing those that are, so should we approach the management of our patients with obesity. Although behavioral interventions centered on lifestyle changes (diet/exercise) remain first-line therapy for the treatment of obesity [1], many patients will seek additional tools, such as meal replacement, medication, or even bariatric surgery, to help achieve and maintain weight loss.

In the past 2 to 3 years, there has been a flurry of activity by the FDA to approve new medications for weight loss. In keeping with the view of obesity as a chronic condition, some of these medications, including lorcaserin and phentermine-topiramate ER, have even been approved for long-term use [2]. While the addition of new options to the weight loss toolkit is exciting, it may also be daunting for clinicians who have watched a bevy of weight loss drugs come on, and then off, the market over the years because of serious adverse events. For physicians and patients considering a new weight loss medication, there is therefore a clear need to minimize the risk of drug-related adverse effects while maximizing the patient’s chances of losing weight.

Growing evidence from trials of behavioral interventions as well as weight loss medications suggests that the individuals who will ultimately achieve weight loss success with a given intervention or medication tend to show signs of that success relatively early in the course of therapy [3–5]. For clinicians this is extremely useful, because it may allow the physician and patient to decide more quickly to discontinue a likely ineffective option in favor of one not yet tried, minimizing the risk of adverse events while maximizing the chances of successful weight loss.
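In practice, this early-response logic amounts to a simple decision rule. The sketch below (Python; the function name and parameterization are illustrative, not from the study, with the 5% week-12 cut-point discussed in this article as the default) shows how such a rule might be encoded, for example in a decision-support checklist:

```python
def meets_week12_response(baseline_kg: float, week12_kg: float,
                          threshold: float = 0.05) -> bool:
    """Return True if the patient has lost at least `threshold`
    (default 5%) of baseline body weight by week 12 of therapy.

    Under an early-stopping rule, a False result would prompt a
    discussion about discontinuing the medication in favor of
    other therapies.
    """
    if baseline_kg <= 0:
        raise ValueError("baseline weight must be positive")
    fraction_lost = (baseline_kg - week12_kg) / baseline_kg
    return fraction_lost >= threshold

# A 100 kg patient down to 94 kg (6% loss) meets the threshold;
# one down to 97 kg (3% loss) does not.
print(meets_week12_response(100.0, 94.0))  # True
print(meets_week12_response(100.0, 97.0))  # False
```

The clinical judgment, of course, remains with the prescriber; the point is only that the week-12 assessment is concrete enough to be applied systematically.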

In this paper, Smith and colleagues addressed this very question for one of the more recently FDA-approved medications, lorcaserin. This 5-HT2c agonist is a useful addition to the list of weight loss medications, as it has relatively few contraindications, other than that it cannot be used in pregnancy or lactation and should be avoided in patients with a history of heart failure. However, lorcaserin is still relatively costly (eg, compared with phentermine), and if it is to be used for long-term weight loss and maintenance, the financial outlay faced by patients may be considerable. The paper examined not only weight loss outcomes but also the cardiometabolic effects of the medication. Furthermore, the authors examined outcomes separately for diabetic and nondiabetic patients, as the risk/benefit ratio of remaining on the medication could differ substantially between the 2 groups.

Importantly, the study was a set of secondary analyses of data aggregated from several trials that were not originally designed to answer this question. Although most of the original trial participants had data at both weeks 12 and 52 (a requirement for inclusion in this analysis), up to a quarter of patients in some groups were missing one measure or the other. Whether those analyzed represent a biased subsample, which would limit the generalizability of the results, cannot be ascertained.

In reviewing the outcomes achieved by early responders and non-responders, it was very interesting to note that so-called “responders” to placebo followed a nearly identical weight loss trajectory to those on lorcaserin. This should not be taken to indicate that lorcaserin is no different from placebo, as the overall chances of achieving weight loss were significantly greater among the lorcaserin participants. It is striking, however, that placebo patients who clearly followed the recommended lifestyle changes did just as well as patients receiving the active study drug. This underscores the need to educate patients and encourage them, first and foremost, to make a real effort at diet and exercise regardless of what other tools are employed to achieve weight loss.

Another issue to consider is the clear difference in racial/ethnic makeup between responders and non-responders. This finding is not unexpected: in many prior weight loss trials, particularly of behavioral interventions, African-American women have experienced less weight loss than their non-Hispanic white counterparts [6]. These differences were observed for both lorcaserin and placebo patients, raising a concern that the lifestyle intervention component of the study was not equally successful for minority participants compared with non-Hispanic white participants. More research is needed on behavioral interventions that work well in diverse populations.

One finding of interest is that among diabetic participants (BLOOM-DM), glycemic control parameters improved nearly equally between lorcaserin early responders and non-responders, despite the differences between those groups for year-end weight loss. The reasons for this are not clear but could merit further investigation.

Ultimately, however, even among this large group of randomized trial participants, who were likely highly motivated, only about 40% of nondiabetics and 30% of diabetics were classified as week 12 responders to lorcaserin. This suggests that well over half of real-world patients who initiate the drug may not achieve their desired weight loss goals with it. Given the cost of the medication, this must be weighed before prescribing, and it reinforces the importance of reassessing a patient’s weight loss progress early and often so that the medication can be discontinued in favor of other therapies as needed.

Applications for Clinical Practice

For providers interested in prescribing lorcaserin, a clear plan should be made for early and regular follow-up to assess the patient’s response to the medication. Patients should understand that if they are not responding within 3 months, or perhaps sooner if they experience negative side effects, their physician may elect to discontinue it. Importantly, lorcaserin should be prescribed only to patients who are also willing to undertake the behavioral changes necessary to promote weight loss, and it should be underscored that their chances of successful weight loss, with or without the medication, will be greatly enhanced by doing so.

 —Kristina Lewis, MD, MPH

References

1. Jensen MD, Ryan DH, Apovian CM, et al. 2013 AHA/ACC/TOS Guideline for the management of overweight and obesity in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and The Obesity Society. J Am Coll Cardiol 2014;63(25 Pt B):2985–3023.

2. Hurt RT, Jithinraj EV, Ebbert JO. New pharmacological treatments for the management of obesity. Curr Gastroenterol Rep 2014;16(6):1–8.

3. Wadden TA, Foster GD, Wang J, et al. Clinical correlates of short- and long-term weight loss. Am J Clin Nutr 1992;56(Suppl 1):271S–274S.

4. Rissanen A, Lean M, Rossner S, et al. Predictive value of early weight loss in obesity management with orlistat: An evidence-based assessment of prescribing guidelines. Int J Obes Relat Metab Disord 2003;27:103–9.

5. O’Neil P, Foster G, Billes S, et al. Early weight loss with naltrexone SR/bupropion SR combination therapy for obesity predicts long-term weight loss (Abstract). Obesity 2009;17:S109.

6. Kumanyika SK, Whitt-Glover MC, Haire-Joshu D. What works for obesity prevention and treatment in black Americans? Research directions. Obes Rev 2014;15:204–12.

Journal of Clinical Outcomes Management - December 2014, Vol. 21, No. 12

Study Overview

Objective. To examine whether an early response (or non-response) to lorcaserin therapy predicts ≥ 5% weight loss achieved at 1 year.

Study design. Secondary analysis of data collected in 3 placebo-controlled blinded randomized trials.

Setting and participants. This study relied upon data collected as part of 3 separate phase 3 clinical trials of lorcaserin, a weight loss drug and selective serotonin 2c (5-HT2c) agonist. The first study, “Behavioral Modification and Lorcaserin for Overweight and Obesity Management” (BLOOM; n = 3182) enrolled overweight (with at least 1 comorbidity) or obese (no comorbidity needed) adult patients (18–65 yr) without diabetes, to determine the safety and efficacy of lorcaserin. The second trial, “Behavioral Modification and Lorcaserin Second Study of Obesity Management” (BLOSSOM; n = 4008) enrolled a similar population as BLOOM. For both BLOOM and BLOSSOM, patients were randomly assigned to receive either lorcaserin (10 mg po bid) or placebo for a 1-year period, and all patients received advice and instruction in exercise goals (at least 30 min/day) and caloric intake (600 kcal less than recommended for weight maintenance for that individual) necessary to promote weight loss. The third trial, BLOOM-DM (n = 604) focused on overweight or obese diabetic patients, but otherwise was similar in methodology to BLOOM and BLOSSOM. All studies took place in multiple US academic and private medical centers and were funded by Arena Pharmaceuticals. For the current analysis, the investigators used data from these trials and classified participants as either “responders” or “non-responders” based on each participant’s early weight loss response to either lorcaserin or placebo.

Main outcome measures. The investigators used area under the curve for the receiver operating characteristic (AUC for ROC) analysis to determine whether an early weight loss response to lorcaserin or placebo predicted a patient’s longer-term (52-week) weight loss. Several steps were used to conduct these analyses.

First, the investigators needed to determine what amount of weight loss at which of several early time-points would qualify a participant as a “responder” to either drug or placebo. They compared weight lost at weeks 2, 4, 8 and 12, using AUC for ROC analysis to identify the appropriate “responder” or “non-responder” cut-points, and classified all participants with data points in these early weeks as such. Second, all of the early responder and non-responder participants with 52-week weight data were then classified as to whether or not they had achieved at least a 5% weight loss at the end of the study. AUC for ROC analysis was again used to determine whether this early categorization was predictive of final study response. In addition to looking at early response as predictive of final weight loss, the investigators also examined the response/non-response variable’s ability to predict other health outcomes, including changes in lipid levels, blood pressure, and, for type 2 diabetic participants, changes in glycemic control (fasting plasma glucose [FPG] and HgbA1c). Finally, the investigators examined the incidence of adverse events in the different groups as well.

Results. The investigators identified a 4.6% weight loss by week 12 on lorcaserin or placebo as the optimal cut-point for determining whether a participant was a “responder” or “non-responder” (“W12R” or “W12NR”). This cut-point had an AUC (95% CI) of 0.849 (0.828–0.870) for predicting ≥ 5% weight loss at 52 weeks, with a positive predictive value (PPV) of 0.855 and negative predictive value (NPV) of 0.740, thus optimizing specificity and sensitivity of the time/weight cut-point compared to those at weeks 2, 4, or 8. Given the need for practical clinical recommendations, however, the investigators used a cut-point of 5% weight loss by week 12 to determine response/non-response for the health outcome analyses. The breakdown of responders vs. non-responders was as follows: For the pooled BLOOM/BLOSSOM participants, there were 1251 lorcaserin-recipient responders and 1286 lorcaserin-recipient non-responders (about 40% of those randomized to lorcaserin were “responders”). Among placebo recipients, there were 541 early responders and 1852 non-responders (about 17% of those randomized to placebo were “responders”). For the diabetic BLOOM-DM participants, the ratios were similar although slightly less favorable, with only about 30% (n = 78) of lorcaserin patients classified W12 responders (139 non-responders), and 10% (n = 25) of placebo patients as W12 responders (192 non-responders).

The lorcaserin and placebo groups in BLOOM and BLOSSOM were similar to one another, with overall mean (SD) age of 43.8 (11.6) years for lorcaserin and 44.0 (11.4) years for placebo. The vast majority of participants in these 2 trials were female (81.7% in lorcaserin arms, 81.0% in placebo), and the majority were non-Hispanic white (67.6% in lorcaserin and 66.2% in placebo). The mean (SD) baseline body mass index (BMI, kg/m2) was 36.1 (4.3) for lorcaserin and 36.1 (4.2) for placebo. The BLOOM-DM participants were also similar in the lorcaserin and placebo arms, although they were older (mean age, 53.2 years lorcaserin, 52 years placebo), and more likely to be female (53.5% lorcaserin, 54.4% placebo). Otherwise, the BLOOM-DM participants were similar on reported demographic characteristics to those in the other 2 trials.

Importantly, however, for all 3 trials there were differences in demographic characteristics between those participants characterized as responders and those characterized as non-responders. Amongst the nondiabetic participants in the BLOOM and BLOSSOM studies, responders (to both lorcaserin and placebo) were more likely to be non-Hispanic white (as opposed to African American or Hispanic participants, who were more likely to be non-responders), and responders were older than non-responders. Interestingly, for the diabetics in the BLOOM-DM trial, the responder/non-responder differences were less pronounced, although the responders were still slightly more likely to be non-Hispanic white and older, particularly for placebo.

Among BLOOM and BLOSSOM participants who received lorcaserin, mean weight loss at 52 weeks was 10.8% among W12Rs and only 2.7% among W12NRs. A similar pattern was observed in the BLOOM and BLOSSOM placebo participants; W12Rs averaged 9.5% weight loss at 52 weeks, versus just 1.1% in W12NRs.  Among diabetics receiving lorcaserin in the BLOOM-DM study, weight loss at 1 year was 9.1% in W12Rs versus 3.1% in W12NRs. Similarly, in placebo-recipients in BLOOM-DM, weight loss at 1 year was 7% for W12Rs and 1.3% for W12NRs. When the weight loss at 1 year was categorized in terms of whether or not participants achieved at least 5% or 10% weight loss, once again early responders to either lorcaserin or placebo had higher rates of achieving both thresholds. Namely, 85.5% of nondiabetic W12Rs had achieved or maintained 5% weight loss at week 52, while only 26% of the W12NRs ultimately did so. Seventy percent of diabetic W12Rs to lorcaserin had ≥ 5% weight loss at week 52 and 25.2% of W12NRs did. The pattern of prediction for achieving 10% weight loss at week 52 was even more pronounced, with, for example, 49.8% of nondiabetic W12Rs having lost at least 10% of their starting weight at 1 year, versus just 4.7% of W12NRs.

When cardiometabolic outcomes were examined, the differences between W12 lorcaserin responders and non-responders appeared to be somewhat attenuated. For example, among diabetic patients, W12 lorcaserin responders had a mean decrease of 1.2% in their A1c level by study end, compared to a nearly 1% decrease in W12NRs. For fasting plasma glucose, the improvement at week 52 was pronounced (about 30 mg/dL lower than baseline) and very similar in W12 responders and non-responders.

Among nondiabetics, average blood pressure lowering (SBP and DBP) at week 52 was greater among lorcaserin W12 responders (SBP dropped 4 mm Hg on average, DBP 3 mm Hg) than it was among non-responders (SBP and DBP dropped by about 1 mm Hg). Other than triglycerides, which decreased substantially among W12 responders (whether on placebo or lorcaserin), changes to lipid profile were relatively small for nondiabetics. Among diabetics, however, LDL and HDL both increased on average in all 4 groups (W12 responders/non-responders to placebo/lorcaserin) by week 52.

Common adverse events for lorcaserin-treated patients included headache (15%–17%), upper respiratory infections (9%–14%), nausea (8%–9%), and dizziness (8% among nondiabetics). Among diabetics, hypoglycemia occurred in 29.3% of those treated with lorcaserin (vs. 21% on placebo). Week 12 responders and non-responders appeared to have a similar adverse event profile, and, in general, adverse events were more common among lorcaserin than placebo participants.

Conclusion. The authors of this study concluded that a week-12 weight loss of ≥ 5% on lorcaserin was a strong predictor of achieving at least that same amount of weight loss, as well as improvements in some cardiometabolic parameters, at 1 year.

Commentary

In 2013, the American Medical Association officially recognized obesity as a disease. This shift in terminology, coupled with a movement towards reimbursing primary care providers for obesity-related interventions, has created a growing awareness among providers that better treatment options for this chronic condition are sorely needed. Just as we treat patients with hypertension and type 2 diabetes by titrating medications, discontinuing those that aren’t effective and continuing those that are, so should we approach the management of our patients with obesity. Although behavioral interventions centered around lifestyle changes (diet/exercise) remain first-line therapies for the treatment of obesity [1], many patients will seek additional tools, such as meal replacement, medication, or even bariatric surgery, to help achieve and maintain weight loss.

In the past 2 to 3 years, there has been a flurry of activity by the FDA to approve new medications for weight loss. In keeping with the view of obesity as a chronic condition, some of these medications, including lorcaserin and phentermine-topiramate ER, have even been approved for patient long-term use [2]. While the addition of new options to the weight loss toolkit is exciting, it may also be daunting for clinicians who have witnessed a bevy of weight loss drugs come on, and then off, the market over the years due to serious adverse events experienced by patients. For physicians and patients considering the use of a new weight loss medication, there is therefore a clear need to minimize risk for adverse effects related to the drug, while maximizing the patient’s chances of losing weight.

Growing evidence from trials of behavioral interventions as well as weight loss medications suggests that the individuals who will ultimately achieve weight loss success with a given intervention/medication, tend to indicate that success relatively early on in the course of therapy [3–5]. For clinicians, this fact is extremely useful, because it may allow the physician and patient to more rapidly make a decision to discontinue a likely ineffective option in favor of another that has not yet been tried, thus minimizing risks for adverse events while maximizing chances of weight loss outcomes.

In this paper, Smith and colleagues addressed this very important issue for one of the more recently FDA-approved medications, lorcaserin. This 5-HT2c agonist is a useful addition to the list of weight loss medications, as it has relatively few contraindications, other than that it cannot be used in pregnancy/lactation and should be avoided in those with a history of heart failure. However, lorcaserin is still relatively costly (eg, compared to phentermine) and, if it is going to be used for long-term weight loss/maintenance, the financial outlay faced by patients might be considerable. In addition to answering an important question, this paper also examined not only weight loss outcomes but also cardiometabolic impacts of the medication. Furthermore, the authors separately examined outcomes for diabetic and nondiabetic patients, as the risk/benefit ratio of remaining on this medication could be quite different between the 2 groups.

Importantly, the study represented a group of secondary analyses of data aggregated from several trials—trials that were not originally designed to answer this question. Although the majority of original trial participants did have data at weeks 12 and 52 (requirement for inclusion in this analysis), up to a quarter of patients in some groups were missing one or the other measure. Whether or not those analyzed represented a biased subsample, and therefore do not have generalizable results, cannot be ascertained.

In reviewing the outcomes achieved by early responders and non-responders, it was very interesting to note that so-called “responders” to placebo followed a nearly identical weight loss trajectory as those on lorcaserin. This fact should not be taken to indicate that lorcaserin is no different from placebo, as the overall chances of achieving weight loss were significantly greater among the lorcaserin participants. However, it is interesting that, for those placebo patients who clearly followed the recommended lifestyle changes, they did just as well as patients receiving active study drug. This underscores the need to educate patients and encourage them, first and foremost, to make a real effort to diet and exercise regardless of what other tools are employed to achieve weight loss.

Another issue to consider for this study is that there are clear differences in the racial/ethnic makeup of responders versus non-responders. This finding is not unexpected, as in many prior weight loss trials, particularly for behavioral interventions, African-American women have experienced less weight loss than their non-Hispanic white counterparts [6]. These differences were observed both for lorcaserin and placebo patients, raising a concern that the lifestyle intervention component of the study was not equally successful for minorities compared to the non-Hispanic white participants. More research is needed on behavioral interventions that work well in diverse populations.

One finding of interest is that among diabetic participants (BLOOM-DM), glycemic control parameters improved nearly equally between lorcaserin early responders and non-responders, despite the differences between those groups for year-end weight loss. The reasons for this are not clear but could merit further investigation.

Ultimately however, even among this large group of randomized trial participants, who were likely highly motivated, only about 40% of nondiabetics and 30% of diabetics were classified as week 12 responders to lorcaserin. That means that likely well over half of the real-world patients who initiate the drug may not achieve their desired weight loss goals with it. Given the cost of the medication, this must be considered before prescribing it, and it reinforces the importance of being willing to reassess a patient’s weight loss progress early and often so that the medication can be discontinued in favor of other therapies as needed.

Applications for Clinical Practice

For providers interested in prescribing lorcaserin to their patients, a clear plan should be made to have regular and early follow-up to assess the patient’s response to the medication. Patients should understand that if they are not responding to the medication within 3 months, or perhaps sooner if they are experiencing any negative side effects, their physician may elect to discontinue it. Importantly, they should only be given lorcaserin if they are also willing to undertake the behavioral changes necessary to promote weight loss, and it should be underscored that their chances of successful weight loss with or without the medication will be greatly enhanced by doing so.

 —Kristina Lewis, MD, MPH

Study Overview

Objective. To examine whether an early response (or non-response) to lorcaserin therapy predicts ≥ 5% weight loss achieved at 1 year.

Study design. Secondary analysis of data collected in 3 placebo-controlled blinded randomized trials.

Setting and participants. This study relied upon data collected as part of 3 separate phase 3 clinical trials of lorcaserin, a weight loss drug and selective serotonin 2c (5-HT2c) agonist. The first study, “Behavioral Modification and Lorcaserin for Overweight and Obesity Management” (BLOOM; n = 3182) enrolled overweight (with at least 1 comorbidity) or obese (no comorbidity needed) adult patients (18–65 yr) without diabetes, to determine the safety and efficacy of lorcaserin. The second trial, “Behavioral Modification and Lorcaserin Second Study of Obesity Management” (BLOSSOM; n = 4008) enrolled a similar population as BLOOM. For both BLOOM and BLOSSOM, patients were randomly assigned to receive either lorcaserin (10 mg po bid) or placebo for a 1-year period, and all patients received advice and instruction in exercise goals (at least 30 min/day) and caloric intake (600 kcal less than recommended for weight maintenance for that individual) necessary to promote weight loss. The third trial, BLOOM-DM (n = 604) focused on overweight or obese diabetic patients, but otherwise was similar in methodology to BLOOM and BLOSSOM. All studies took place in multiple US academic and private medical centers and were funded by Arena Pharmaceuticals. For the current analysis, the investigators used data from these trials and classified participants as either “responders” or “non-responders” based on each participant’s early weight loss response to either lorcaserin or placebo.

Main outcome measures. The investigators used area under the curve for the receiver operating characteristic (AUC for ROC) analysis to determine whether an early weight loss response to lorcaserin or placebo predicted a patient’s longer-term (52-week) weight loss. Several steps were used to conduct these analyses.

First, the investigators needed to determine what amount of weight loss at which of several early time-points would qualify a participant as a “responder” to either drug or placebo. They compared weight lost at weeks 2, 4, 8 and 12, using AUC for ROC analysis to identify the appropriate “responder” or “non-responder” cut-points, and classified all participants with data points in these early weeks as such. Second, all of the early responder and non-responder participants with 52-week weight data were then classified as to whether or not they had achieved at least a 5% weight loss at the end of the study. AUC for ROC analysis was again used to determine whether this early categorization was predictive of final study response. In addition to looking at early response as predictive of final weight loss, the investigators also examined the response/non-response variable’s ability to predict other health outcomes, including changes in lipid levels, blood pressure, and, for type 2 diabetic participants, changes in glycemic control (fasting plasma glucose [FPG] and HgbA1c). Finally, the investigators examined the incidence of adverse events in the different groups as well.

Results. The investigators identified a 4.6% weight loss by week 12 on lorcaserin or placebo as the optimal cut-point for determining whether a participant was a “responder” or “non-responder” (“W12R” or “W12NR”). This cut-point had an AUC (95% CI) of 0.849 (0.828–0.870) for predicting ≥ 5% weight loss at 52 weeks, with a positive predictive value (PPV) of 0.855 and negative predictive value (NPV) of 0.740, thus optimizing specificity and sensitivity of the time/weight cut-point compared to those at weeks 2, 4, or 8. Given the need for practical clinical recommendations, however, the investigators used a cut-point of 5% weight loss by week 12 to determine response/non-response for the health outcome analyses. The breakdown of responders vs. non-responders was as follows: For the pooled BLOOM/BLOSSOM participants, there were 1251 lorcaserin-recipient responders and 1286 lorcaserin-recipient non-responders (about 40% of those randomized to lorcaserin were “responders”). Among placebo recipients, there were 541 early responders and 1852 non-responders (about 17% of those randomized to placebo were “responders”). For the diabetic BLOOM-DM participants, the ratios were similar although slightly less favorable, with only about 30% (n = 78) of lorcaserin patients classified W12 responders (139 non-responders), and 10% (n = 25) of placebo patients as W12 responders (192 non-responders).

The lorcaserin and placebo groups in BLOOM and BLOSSOM were similar to one another, with overall mean (SD) age of 43.8 (11.6) years for lorcaserin and 44.0 (11.4) years for placebo. The vast majority of participants in these 2 trials were female (81.7% in lorcaserin arms, 81.0% in placebo), and the majority were non-Hispanic white (67.6% in lorcaserin and 66.2% in placebo). The mean (SD) baseline body mass index (BMI, kg/m2) was 36.1 (4.3) for lorcaserin and 36.1 (4.2) for placebo. The BLOOM-DM participants were also similar in the lorcaserin and placebo arms, although they were older (mean age, 53.2 years lorcaserin, 52 years placebo), and more likely to be female (53.5% lorcaserin, 54.4% placebo). Otherwise, the BLOOM-DM participants were similar on reported demographic characteristics to those in the other 2 trials.

Importantly, however, for all 3 trials there were differences in demographic characteristics between those participants characterized as responders and those characterized as non-responders. Amongst the nondiabetic participants in the BLOOM and BLOSSOM studies, responders (to both lorcaserin and placebo) were more likely to be non-Hispanic white (as opposed to African American or Hispanic participants, who were more likely to be non-responders), and responders were older than non-responders. Interestingly, for the diabetics in the BLOOM-DM trial, the responder/non-responder differences were less pronounced, although the responders were still slightly more likely to be non-Hispanic white and older, particularly for placebo.

Among BLOOM and BLOSSOM participants who received lorcaserin, mean weight loss at 52 weeks was 10.8% among W12Rs and only 2.7% among W12NRs. A similar pattern was observed in the BLOOM and BLOSSOM placebo participants; W12Rs averaged 9.5% weight loss at 52 weeks, versus just 1.1% in W12NRs.  Among diabetics receiving lorcaserin in the BLOOM-DM study, weight loss at 1 year was 9.1% in W12Rs versus 3.1% in W12NRs. Similarly, in placebo-recipients in BLOOM-DM, weight loss at 1 year was 7% for W12Rs and 1.3% for W12NRs. When the weight loss at 1 year was categorized in terms of whether or not participants achieved at least 5% or 10% weight loss, once again early responders to either lorcaserin or placebo had higher rates of achieving both thresholds. Namely, 85.5% of nondiabetic W12Rs had achieved or maintained 5% weight loss at week 52, while only 26% of the W12NRs ultimately did so. Seventy percent of diabetic W12Rs to lorcaserin had ≥ 5% weight loss at week 52 and 25.2% of W12NRs did. The pattern of prediction for achieving 10% weight loss at week 52 was even more pronounced, with, for example, 49.8% of nondiabetic W12Rs having lost at least 10% of their starting weight at 1 year, versus just 4.7% of W12NRs.

When cardiometabolic outcomes were examined, the differences between W12 lorcaserin responders and non-responders appeared to be somewhat attenuated. For example, among diabetic patients, W12 lorcaserin responders had a mean decrease of 1.2% in their A1c level by study end, compared to a nearly 1% decrease in W12NRs. For fasting plasma glucose, the improvement at week 52 was pronounced (about 30 mg/dL lower than baseline) and very similar in W12 responders and non-responders.

Among nondiabetics, average blood pressure lowering (SBP and DBP) at week 52 was greater among lorcaserin W12 responders (SBP dropped 4 mm Hg on average, DBP 3 mm Hg) than it was among non-responders (SBP and DBP dropped by about 1 mm Hg). Other than triglycerides, which decreased substantially among W12 responders (whether on placebo or lorcaserin), changes to lipid profile were relatively small for nondiabetics. Among diabetics, however, LDL and HDL both increased on average in all 4 groups (W12 responders/non-responders to placebo/lorcaserin) by week 52.

Common adverse events for lorcaserin-treated patients included headache (15%–17%), upper respiratory infections (9%–14%), nausea (8%–9%), and dizziness (8% among nondiabetics). Among diabetics, hypoglycemia occurred in 29.3% of those treated with lorcaserin (vs. 21% on placebo). Week 12 responders and non-responders appeared to have a similar adverse event profile, and, in general, adverse events were more common among lorcaserin than placebo participants.

Conclusion. The authors of this study concluded that a week-12 weight loss of ≥ 5% on lorcaserin was a strong predictor of achieving at least that same amount of weight loss, as well as improvements in some cardiometabolic parameters, at 1 year.

Commentary

In 2013, the American Medical Association officially recognized obesity as a disease. This shift in terminology, coupled with a movement towards reimbursing primary care providers for obesity-related interventions, has created a growing awareness among providers that better treatment options for this chronic condition are sorely needed. Just as we treat patients with hypertension and type 2 diabetes by titrating medications, discontinuing those that aren’t effective and continuing those that are, so should we approach the management of our patients with obesity. Although behavioral interventions centered around lifestyle changes (diet/exercise) remain first-line therapies for the treatment of obesity [1], many patients will seek additional tools, such as meal replacement, medication, or even bariatric surgery, to help achieve and maintain weight loss.

In the past 2 to 3 years, there has been a flurry of activity by the FDA to approve new medications for weight loss. In keeping with the view of obesity as a chronic condition, some of these medications, including lorcaserin and phentermine-topiramate ER, have even been approved for patient long-term use [2]. While the addition of new options to the weight loss toolkit is exciting, it may also be daunting for clinicians who have witnessed a bevy of weight loss drugs come on, and then off, the market over the years due to serious adverse events experienced by patients. For physicians and patients considering the use of a new weight loss medication, there is therefore a clear need to minimize risk for adverse effects related to the drug, while maximizing the patient’s chances of losing weight.

Growing evidence from trials of behavioral interventions as well as weight loss medications suggests that individuals who will ultimately achieve weight loss success with a given intervention or medication tend to show that success relatively early in the course of therapy [3–5]. For clinicians, this is extremely useful: it may allow the physician and patient to decide more quickly to discontinue a likely ineffective option in favor of one that has not yet been tried, minimizing the risk for adverse events while maximizing the chances of successful weight loss.

In this paper, Smith and colleagues addressed this very important issue for one of the more recently FDA-approved medications, lorcaserin. This 5-HT2c agonist is a useful addition to the list of weight loss medications, as it has relatively few contraindications: it cannot be used in pregnancy or lactation and should be avoided in those with a history of heart failure. However, lorcaserin is still relatively costly (eg, compared with phentermine), and if it is to be used for long-term weight loss and maintenance, the financial outlay faced by patients could be considerable. Beyond answering an important question, the paper examined not only weight loss outcomes but also the cardiometabolic effects of the medication. Furthermore, the authors separately examined outcomes for diabetic and nondiabetic patients, as the risk/benefit ratio of remaining on the medication could be quite different between the 2 groups.

Importantly, the study was a set of secondary analyses of data aggregated from several trials that were not originally designed to answer this question. Although the majority of original trial participants had data at both weeks 12 and 52 (a requirement for inclusion in this analysis), up to a quarter of patients in some groups were missing one or the other measure. Whether those analyzed represent a biased subsample, and therefore yield results that are not generalizable, cannot be ascertained.

In reviewing the outcomes achieved by early responders and non-responders, it was very interesting to note that so-called “responders” to placebo followed a nearly identical weight loss trajectory to those on lorcaserin. This should not be taken to indicate that lorcaserin is no different from placebo, as the overall chances of achieving weight loss were significantly greater among the lorcaserin participants. It is striking, however, that placebo patients who clearly followed the recommended lifestyle changes did just as well as patients receiving the active study drug. This underscores the need to educate patients and encourage them, first and foremost, to make a real effort to diet and exercise regardless of what other tools are employed to achieve weight loss.

Another issue to consider for this study is that there were clear differences in the racial/ethnic makeup of responders versus non-responders. This finding is not unexpected: in many prior weight loss trials, particularly of behavioral interventions, African-American women have experienced less weight loss than their non-Hispanic white counterparts [6]. These differences were observed for both lorcaserin and placebo patients, raising a concern that the lifestyle intervention component of the study was not as successful for minority participants as it was for non-Hispanic white participants. More research is needed on behavioral interventions that work well in diverse populations.

One finding of interest is that among diabetic participants (BLOOM-DM), glycemic control parameters improved nearly equally between lorcaserin early responders and non-responders, despite the differences between those groups for year-end weight loss. The reasons for this are not clear but could merit further investigation.

Ultimately, however, even among this large group of randomized trial participants, who were likely highly motivated, only about 40% of nondiabetics and 30% of diabetics were classified as week 12 responders to lorcaserin. This suggests that well over half of real-world patients who initiate the drug may not achieve their desired weight loss goals with it. Given the cost of the medication, this must be weighed before prescribing it, and it reinforces the importance of reassessing a patient’s weight loss progress early and often so that the medication can be discontinued in favor of other therapies as needed.

Applications for Clinical Practice

For providers interested in prescribing lorcaserin to their patients, a clear plan should be made to have regular and early follow-up to assess the patient’s response to the medication. Patients should understand that if they are not responding to the medication within 3 months, or perhaps sooner if they are experiencing any negative side effects, their physician may elect to discontinue it. Importantly, they should only be given lorcaserin if they are also willing to undertake the behavioral changes necessary to promote weight loss, and it should be underscored that their chances of successful weight loss with or without the medication will be greatly enhanced by doing so.

 —Kristina Lewis, MD, MPH

References

1. Jensen MD, Ryan DH, Apovian CM, et al. 2013 AHA/ACC/TOS Guideline for the management of overweight and obesity in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and The Obesity Society. J Am Coll Cardiol 2014;63(25 Pt B):2985–3023.

2. Hurt RT, Jithinraj EV, Ebbert JO. New pharmacological treatments for the management of obesity. Curr Gastroenterol Rep 2014;16(6):1–8.

3. Wadden TA, Foster GD, Wang J, et al. Clinical correlates of short- and long-term weight loss. Am J Clin Nutr 1992;56(Suppl 1):271S–274S.

4. Rissanen A, Lean M, Rossner S, et al. Predictive value of early weight loss in obesity management with orlistat: An evidence-based assessment of prescribing guidelines. Int J Obes Relat Metab Disord 2003;27:103–9.

5. O’Neil P, Foster G, Billes S, et al. Early weight loss with naltrexone SR/bupropion SR combination therapy for obesity predicts long-term weight loss (Abstract). Obesity 2009;17:S109.

6. Kumanyika SK, Whitt-Glover MC, Haire-Joshu D. What works for obesity prevention and treatment in black Americans? Research directions. Obes Rev 2014;15:204–12.


Issue
Journal of Clinical Outcomes Management - December 2014, Vol. 21, No. 12
Display Headline
Letting Our Patients “Fail Fast”: Early Non-Response to Lorcaserin May Be a Good Reason to Discontinue Medication