CKD: Risk Before, Diet After
Q) In my admitting orders for a CKD patient, I wrote for a “renal diet.” However, the nephrology practitioners changed it to a DASH diet. What is the difference? Why would they not want a “renal diet”?
The answer to this question is: It depends on the patient, his/her comorbidities, and whether dialysis is needed (and, if so, which type he/she is receiving). Renal diet is a general term for the medical nutrition therapy (MNT) given to a patient with CKD. Each of the five stages of CKD has its own specific MNT requirements.
The MNT for CKD patients often involves modification of the following nutrients: protein, sodium, potassium, phosphorus, and sometimes fluid. The complexity of this therapy often confuses health care professionals when a CKD patient is admitted to the hospital. Let’s examine each nutrient modification to understand optimal nutrition for CKD patients.
Protein. As kidneys fail to excrete urea, protein metabolism is compromised. Thus, in CKD stages 1 and 2, the general recommendations for protein intake are 0.8 to 1.4 g/kg/d. As a patient progresses into CKD stages 3 and 4, these recommendations decrease to 0.6 to 0.8 g/kg/d. In addition, the Kidney Disease Outcomes Quality Initiative (KDOQI) and the Academy of Nutrition and Dietetics recommend that at least 50% of that protein intake be of high biological value (eg, foods of animal origin, soy proteins, dairy, legumes, and nuts and nut butters).2
Why the wide range in protein intake? Needs vary depending on the patient’s comorbidities and nutritional status. Patients with greater need (eg, documented malnutrition, infections, or wounds) will require more protein than those without documented catabolic stress. Additionally, protein needs are based on weight, making it crucial to obtain an accurate weight. When managing a very overweight or underweight patient, an appropriate standard body weight must be calculated. This assessment should be done by a registered dietitian (RD).
Also, renal replacement therapies, once introduced, sharply increase protein needs. Hemodialysis (HD) can account for free amino acid losses of 5 to 20 g per treatment. Peritoneal dialysis (PD) can result in albumin losses of 5 to 15 g/d.2 As a result, protein needs in HD and PD patients are about 1.2 g/kg/d of standard body weight. It has been reported that 30% to 50% of patients are not consuming these amounts, placing them at risk for malnutrition and a higher incidence of morbidity and mortality.2
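For illustration only, the weight-based targets above can be turned into a daily gram estimate. The sketch below is not from the article or from KDOQI; the factors simply restate the ranges quoted above, the function name is invented, and in practice an RD determines the standard body weight and adjusts for catabolic stress.

```python
def daily_protein_target(standard_body_weight_kg, ckd_stage=None, dialysis=None):
    """Illustrative daily protein target (g/d) for a CKD patient.

    Ranges restate the text above:
      CKD stages 1-2: 0.8-1.4 g/kg/d
      CKD stages 3-4: 0.6-0.8 g/kg/d
      HD or PD:       ~1.2 g/kg/d of standard body weight
    """
    if dialysis in ("HD", "PD"):
        low, high = 1.2, 1.2
    elif ckd_stage in (1, 2):
        low, high = 0.8, 1.4
    elif ckd_stage in (3, 4):
        low, high = 0.6, 0.8
    else:
        raise ValueError("Specify CKD stage 1-4 or dialysis='HD'/'PD'")
    return low * standard_body_weight_kg, high * standard_body_weight_kg

# Example: a patient with a 70-kg standard body weight on hemodialysis
# needs roughly 84 g of protein per day.
print(daily_protein_target(70, dialysis="HD"))
```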
Sodium. In CKD stages 1 to 4, dietary sodium intake should be less than 2,400 mg/d. The Dietary Approaches to Stop Hypertension (DASH) diet and a Mediterranean diet have been associated with reduced risk for decline in glomerular filtration rate (GFR) and better blood pressure control.3 Both of these diets can be employed, especially in the beginning stages of CKD. But as CKD progresses and urine output declines, recommendations for sodium intake for both HD and PD patients decrease to 2,000 mg/d. In an anuric patient, 2,000 mg/d is the maximum.2
Potassium. As kidney function declines, potassium retention occurs. In CKD stages 1 to 4, potassium restriction is not employed unless the serum level rises above normal.2 The addition of an ACE inhibitor or an angiotensin II receptor blocker to the medication regimen necessitates close monitoring of potassium levels. Potassium allowance for HD varies according to the patient’s urine output and can range from 2 to 4 g/d. PD patients generally can tolerate 3 to 4 g/d of potassium without becoming hyperkalemic, as potassium is well cleared with PD.2
Phosphorus. Mineral and bone abnormalities begin early in the course of CKD and lead to high-turnover bone disease, adynamic bone disease, fractures, and soft-tissue calcification. Careful monitoring of calcium, intact parathyroid hormone, and phosphorus levels is required throughout all stages of CKD, with hyperphosphatemia of particular concern.
In CKD stages 1 and 2, dietary phosphorus should be limited to maintain a normal serum level.2 As CKD progresses and phosphorus retention increases, 800 to 1,000 mg/d or 10 to 12 mg of phosphorus per gram of protein should be prescribed.
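As a worked example (not part of the article), the two phosphorus framings above can be computed side by side for a given protein prescription; the helper below is hypothetical and makes no attempt to reconcile the absolute and protein-linked ranges.

```python
def phosphorus_targets(protein_g_per_day):
    """Return (protein-linked range, absolute range) in mg/d, per the text above.

    The article gives 800-1,000 mg/d or 10-12 mg of phosphorus per gram of
    protein as alternative framings; both are returned rather than combined.
    """
    protein_linked = (10 * protein_g_per_day, 12 * protein_g_per_day)
    absolute = (800, 1000)
    return protein_linked, absolute

# Example: a 70 g/d protein prescription -> (700, 840) mg/d vs (800, 1000) mg/d.
print(phosphorus_targets(70))
```

For a typical 70 g/d protein prescription, the protein-linked range (700 to 840 mg/d) sits close to the absolute 800 to 1,000 mg/d limit, which is presumably why either framing can be used.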
Even with the limitation of dietary phosphorus, phosphate-binding medications may be required to control serum phosphorus in later CKD stages and in HD and PD patients.2 Limiting dietary phosphorus can be difficult for patients because of inorganic phosphate salt additives widely found in canned and processed foods; they are also added to dark colas and to meats and poultry to act as preservatives and improve flavor and texture. Phosphorus additives are 100% bioavailable and therefore more readily absorbed than organic phosphorus.4
Fluid. Lastly, CKD patients need to think about their fluid intake. HD patients with a urine output greater than 1,000 mL per 24 hours are allowed up to 2,000 mL/d of fluid. (A 12-oz canned drink is 355 mL.) Those with less than 1,000 mL of urine output are allowed 1,000 to 1,500 mL/d, and anuric patients are capped at 1,000 mL/d. PD patients are allowed 1,000 to 3,000 mL/d, depending on urine output and overall status.2 Patients should also be reminded that foods such as soup and gelatin count toward their fluid allowance.
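These urine-output rules amount to a simple decision table. The following sketch is illustrative only; the function and its return convention are assumptions, the thresholds restate the text and reference 2, and it is meant to summarize the logic, not to serve as a clinical tool.

```python
def fluid_allowance_ml(modality, urine_output_ml_per_day):
    """Illustrative daily fluid allowance range (low, high) in mL/d.

    HD: anuric            -> up to 1,000 mL/d
        <= 1,000 mL urine -> 1,000-1,500 mL/d
        >  1,000 mL urine -> up to 2,000 mL/d
    PD: 1,000-3,000 mL/d depending on urine output and overall status.
    Foods such as soup and gelatin count toward the allowance.
    """
    if modality == "PD":
        return 1000, 3000
    if modality == "HD":
        if urine_output_ml_per_day == 0:
            return 0, 1000
        if urine_output_ml_per_day > 1000:
            return 0, 2000
        return 1000, 1500
    raise ValueError("modality must be 'HD' or 'PD'")

# Example: an HD patient with 500 mL/d urine output -> 1,000-1,500 mL/d.
print(fluid_allowance_ml("HD", 500))
```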
The complexities of the “renal diet” make patient education by an RD critical. However, a recent article suggested that MNT for CKD patients is underutilized, with limited referrals and lack of education for primary care providers and RDs cited as reasons.5 This is mystifying considering that Medicare will pay for RD services for CKD patients.
The National Kidney Disease Education Program, in association with the Academy of Nutrition and Dietetics, has developed free professional and patient education materials to address this need; they are available at http://nkdep.nih.gov/.
Luanne DiGuglielmo, MS, RD, CSR
DaVita Summit Renal Center
Mountainside, New Jersey
REFERENCES
1. McMahon GM, Preis SR, Hwang S-J, Fox CS. Mid-adulthood risk factor profiles for CKD. J Am Soc Nephrol. 2014 Jun 26; [Epub ahead of print].
2. Byham-Gray L, Stover J, Wiesen K. A Clinical Guide to Nutrition Care in Kidney Disease. 2nd ed. The Academy of Nutrition and Dietetics; 2013.
3. Crews DC. Chronic kidney disease and access to healthful foods. ASN Kidney News. 2014;6(5):11.
4. Moe SM. Phosphate additives in food: you are what you eat—but shouldn’t you know that? ASN Kidney News. 2014;6(5):8.
5. Narva A, Norton J. Medical nutrition therapy for CKD. ASN Kidney News. 2014;6(5):7.
Prescribing Statins for Patients With ACS? No Need to Wait
PRACTICE CHANGER
Prescribe a high-dose statin before any patient with acute coronary syndrome (ACS) undergoes percutaneous coronary intervention (PCI); it may be reasonable to extend this to patients being evaluated for ACS.1
STRENGTH OF RECOMMENDATION
A: Based on a meta-analysis1
ILLUSTRATIVE CASE
A 48-year-old man comes to the emergency department with chest pain and is diagnosed with ACS. He is scheduled to have PCI within the next 24 hours. When should you start him on a statin?
Statins are the mainstay pharmaceutical treatment for hyperlipidemia and are used for primary and secondary prevention of coronary artery disease and stroke.2,3 Well known for their cholesterol-lowering effect, they also offer benefits independent of lipids, including improving endothelial function, decreasing oxidative stress, and decreasing vascular inflammation.4-6
Compared to patients with stable angina, those with ACS experience markedly higher rates of coronary events, especially immediately before and after PCI and during the subsequent 30 days.1 American College of Cardiology/American Heart Association (ACC/AHA) guidelines for the management of non-ST elevation myocardial infarction (NSTEMI) advocate starting statins before patients are discharged from the hospital, but they don’t specify precisely when.7
Considering the higher risk for coronary events before and after PCI and statins’ pleiotropic effects, it is reasonable to investigate the optimal time to start statins in patients with ACS.
STUDY SUMMARY
Meta-analysis shows statins before PCI cut risk for MI
Navarese et al1 performed a systematic review and meta-analysis of studies comparing the clinical outcomes of patients with ACS who received statins before or after PCI (statins group) with those who received low-dose or no statins (control group). The authors searched PubMed, Cochrane, Google Scholar, and CINAHL databases as well as key conference proceedings for studies published before November 2013. Using reasonable inclusion and exclusion criteria and appropriate statistical methods, they analyzed the results of 20 randomized controlled trials that included 8,750 patients. Four studies enrolled only patients with ST elevation MI (STEMI), eight were restricted to NSTEMI, and the remaining eight studies enrolled patients with any type of MI or unstable angina.
For patients who were started on a statin before PCI, the mean timing of administration was 0.53 days before. For those started after PCI, the average time to administration was 3.18 days after.
Administering statins before PCI resulted in a greater reduction in the odds of MI than did starting them afterward. Whether administered before or after PCI, statins reduced the incidence of MIs. The overall 30-day incidence of MIs was 3.4% (123 of 3,621) in the statins group and 5% (179 of 3,577) in the control group. This resulted in an absolute risk reduction of 1.6% (number needed to treat = 62.5) and a 33% reduction of the odds of MI (odds ratio [OR] = 0.67). There was also a trend toward reduced mortality in the statin group (OR = 0.66).
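Readers who want to check the headline figures can recompute them from the raw counts quoted above. The snippet below is a crude pooled 2x2 calculation for illustration only; the published estimates come from the meta-analytic model in Navarese et al,1 so small differences from rounding are expected.

```python
# Recompute the 30-day figures from the counts quoted above.
statin_events, statin_n = 123, 3621     # statins group
control_events, control_n = 179, 3577   # control group

risk_statin = statin_events / statin_n          # ~0.034 (3.4%)
risk_control = control_events / control_n       # ~0.050 (5.0%)

arr = risk_control - risk_statin                # absolute risk reduction, ~1.6 points
nnt = 1 / arr                                   # number needed to treat, ~62

odds_statin = statin_events / (statin_n - statin_events)
odds_control = control_events / (control_n - control_events)
odds_ratio = odds_statin / odds_control         # ~0.67

print(f"ARR = {arr:.3f}, NNT = {nnt:.0f}, OR = {odds_ratio:.2f}")
```

The pooled NNT of about 62 matches the article's reported 62.5 within rounding of the event rates.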
In addition, administering statins before PCI resulted in a greater reduction in the odds of MI at 30 days (OR = 0.38) than starting them post-PCI (OR = 0.85) when compared to the controls. The difference between the pre-PCI OR and the post-PCI OR was statistically significant; these findings persisted past 30 days.
WHAT’S NEW
Early statin administration is most effective
According to ACC/AHA guidelines, all patients with ACS should be receiving a statin by the time they are discharged. However, when to start the statin is not specified. This meta-analysis is the first report to show that administering a statin before PCI can significantly reduce the risk for subsequent MI.
CAVEATS
Benefits might vary with different statins
The studies evaluated in this meta-analysis used various statins and dosing regimens, which could have affected the results. However, sensitivity analyses found similar benefits across different types of statins. In addition, most of the included trials used high doses of statins, which minimized the potential discrepancy in outcomes from various dosing regimens. And while the included studies were not perfect, Navarese et al1 used reasonable methods to identify potential biases.
CHALLENGES TO IMPLEMENTATION
No barriers to earlier start
Implementing this intervention may be as simple as editing a standard order. This meta-analysis also suggests that the earlier the intervention, the greater the benefit, which may be an argument for starting a statin when a patient first presents for evaluation for ACS, since the associated risks are quite low. We believe it would be beneficial if the next update of the ACC/AHA guidelines7 included this recommendation.
REFERENCES
1. Navarese EP, Kowalewski M, Andreotti F, et al. Meta-analysis of time-related benefits of statin therapy in patients with acute coronary syndrome undergoing percutaneous coronary intervention. Am J Cardiol. 2014;113:1753-1764.
2. Pignone M, Phillips C, Mulrow C. Use of lipid lowering drugs for primary prevention of coronary heart disease: meta-analysis of randomised trials. BMJ. 2000;321:983-986.
3. The Long-Term Intervention with Pravastatin in Ischaemic Disease (LIPID) Study Group. Prevention of cardiovascular events and death with pravastatin in patients with coronary heart disease and a broad range of initial cholesterol levels. N Engl J Med. 1998;339:1349-1357.
4. Liao JK. Beyond lipid lowering: the role of statins in vascular protection. Int J Cardiol. 2002;86:5-18.
5. Li J, Li JJ, He JG, et al. Atorvastatin decreases C-reactive protein-induced inflammatory response in pulmonary artery smooth muscle cells by inhibiting nuclear factor-kappaB pathway. Cardiovasc Ther. 2010;28:8-14.
6. Tandon V, Bano G, Khajuria V, et al. Pleiotropic effects of statins. Indian J Pharmacol. 2005;37:77-85.
7. Wright RS, Anderson JL, Adams CD, et al; American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. 2011 ACCF/AHA focused update incorporated into the ACC/AHA 2007 Guidelines for the Management of Patients with Unstable Angina/Non-ST-Elevation Myocardial Infarction: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines developed in collaboration with the American Academy of Family Physicians, Society for Cardiovascular Angiography and Interventions, and the Society of Thoracic Surgeons. J Am Coll Cardiol. 2011;57:e215-e367.
ACKNOWLEDGEMENT
The PURLs Surveillance System was supported in part by Grant Number UL1RR024999 from the National Center For Research Resources, a Clinical Translational Science Award to the University of Chicago. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center For Research Resources or the National Institutes of Health.
Copyright © 2014. The Family Physicians Inquiries Network. All rights reserved.
Reprinted with permission from the Family Physicians Inquiries Network and The Journal of Family Practice. 2014;63(12):735, 738.
Pain Out of Proportion to a Fracture
A 57-year-old woman sustained an injury to her left shoulder during a fall down stairs. She presented to the emergency department, where a physician ordered x-rays that a radiologist interpreted as depicting a simple fracture.
The patient claimed that the radiologist misread the x-rays and that the emergency medicine (EM) physician failed to realize her pain was out of proportion to a fracture. She said the EM physician should have ordered additional tests and sought a radiologic consult. The patient contended that she had actually dislocated her shoulder and that the delay in treatment caused her condition to worsen, leaving her unable to use her left hand.
In addition to the radiologist and the EM physician, two nurses were named as defendants. The plaintiff maintained that they had failed to notify the physician when her condition deteriorated.
OUTCOME
A $2.75 million settlement was reached. The hospital, the EM physician, and the nurses were responsible for $1.5 million and the radiologist, $1.25 million.
COMMENT
Although complex regional pain syndrome (CRPS, formerly known as reflex sympathetic dystrophy) is not specifically mentioned in this case synopsis, the size of the settlement suggests that it was likely claimed as the resulting injury. CRPS is frequently a source of litigation.
Relatively minor trauma can lead to CRPS; why only certain patients subsequently develop the syndrome, however, is a mystery. What is certain is that CRPS is recognized as one of the most painful conditions known to humankind. Once it develops, the syndrome can result in constant, debilitating pain, the loss of a limb, and near-total decay of a patient’s quality of life.
Plaintiffs’ attorneys are quick to claim negligence and substantial damages for these patients, with their sad, compelling stories. Because the underlying pathophysiology of CRPS is unclear, liability is often hotly debated, with cases difficult to defend.
Malpractice cases generally involve two elements: liability (the presence and magnitude of the error) and damages (the severity of the injury and impact on life). CRPS cases are often considered “damages” cases, because while liability may be uncertain, the patient’s damages are very clear. An understandably sympathetic jury panel sees the unfortunate patient’s red, swollen, misshapen limb, hears the story of the patient’s ever-present, exquisite pain, and (based largely on human emotion) infers negligence based on the magnitude of the patient’s suffering.
In this case, the patient sustained a shoulder injury in a fall that was initially treated as a fracture (presumptively proximal) but later determined to be a dislocation. Management of the injury was not described, but we can assume that if a fracture was diagnosed, the shoulder joint was immobilized. The plaintiff did not claim that there were any diminished neurovascular findings at the time of injury. We are not told whether follow-up was arranged for the patient, what the final, full diagnosis was (eg, fracture/anterior dislocation of the proximal humerus), or when/if the shoulder was actively reduced.
Under these circumstances, what could a bedside clinician have done differently? The most prominent element is the report of “pain out of proportion to the diagnosis.” When confronted with pain that seems out of proportion to a limb injury, stop and review the case. Be sure to consider occult or evolving neurovascular injury (eg, compartment syndrome, brachial plexus injury). Seek consultation and a second opinion in cases involving pain that seems intractable and out of proportion.
One quick word about pain and drug-seeking behavior. Many of us are all too familiar with patients who overstate their symptoms to obtain narcotic pain medications. Will you encounter drug seekers who embellish their level of pain to obtain narcotics? You know the answer to that question.
But it is necessary to take an injured patient’s claim of pain as stated. Don’t view yourself as “wrong” or “fooled” if patients misstate their level of pain and you respond accordingly. In many cases, there is no way to differentiate between genuine manifestations of pain and gamesmanship. To attempt to do so is dangerous because it may lead you to dismiss a patient with genuine pain for fear of being “fooled.” Don’t. Few situations will irritate a jury more than a patient with genuine pathology who is wrongly considered a “drug seeker.” Take patients at face value and act appropriately if substance misuse is later discovered.
In this case, recognition of out-of-control pain may have resulted in an orthopedic consultation. At minimum, that would demonstrate that the patient’s pain was taken seriously and the clinicians acted with due concern for her. —DML
Varying cutoffs of vitamin D add confusion to field
Efforts to reach agreement on how vitamin D deficiency is defined are complicated by the fact that the cutoff points used in reports from clinical laboratories vary widely.
“I think reporting is a great problem because primary care physicians are very hurried,” Dr. John F. Aloia said at a public conference on vitamin D sponsored by the National Institutes of Health. “When you look at the laboratory report, what you get is a column that’s normal and another column that’s low or high. The choice of the laboratories to choose their own cutpoints is really a problem. The other part of that reporting is using the low level of normal in a range at the RDA [recommended daily allowance].”
In its recently updated recommendations on vitamin D screening, the U.S. Preventive Services Task Force noted that variability between serum vitamin D assay methods “and between laboratories using the same methods may range from 10% to 20%, and classification of samples as ‘deficient’ or ‘nondeficient’ may vary by 4% to 32%, depending on which assay is used. Another factor that may complicate interpretation is that 25-(OH)D may act as a negative acute-phase reactant and its levels may decrease in response to inflammation. Lastly, whether common laboratory reference ranges are appropriate for all ethnic groups is unclear.”
Trying to exert influence on what ranges of serum vitamin D laboratories are using in reporting data “is an issue,” said Dr. Aloia, director of the Bone Mineral Research Center at Winthrop University Hospital, Mineola, N.Y., and professor of medicine at Stony Brook (N.Y.) University. “A laboratory can report anything it chooses to. For instance, the American College of Pathology and other [professional organizations] don’t have the responsibility for [the cut-offs in] those reports.”
Dr. Aloia favors basing the reporting of vitamin D levels on something like Z scores, “so when you see lab reports, some of them will have a paragraph of explanation to guide the physician,” he explained. “We’re going to need that. We have to move away from just [a] cutpoint range and the lower level of the range being the RDA.”
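To make the idea concrete, the minimal sketch below shows what Z-score-style reporting of a 25(OH)D result could look like. The reference mean and standard deviation are illustrative assumptions only; they are not published cutpoints or values endorsed by any laboratory or by Dr. Aloia.

```python
# Minimal sketch of Z-score-style reporting for a 25(OH)D result.
# The reference mean and SD below are illustrative assumptions, not
# published cutpoints.

def vitamin_d_z_score(measured_ng_ml: float,
                      reference_mean: float = 28.0,
                      reference_sd: float = 8.0) -> float:
    """Express a 25(OH)D measurement as SDs from an assumed reference mean."""
    return (measured_ng_ml - reference_mean) / reference_sd

if __name__ == "__main__":
    # Example: a result of 18 ng/mL against the assumed reference distribution
    z = vitamin_d_z_score(18.0)
    print(f"25(OH)D Z-score: {z:+.2f}")  # prints -1.25 (1.25 SD below the assumed mean)
```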
Dr. Roger Bouillon, professor emeritus of internal medicine at the University of Leuven (Belgium), supports a threshold of 20 ng/mL serum vitamin D in adults. “I don’t like a range [of vitamin D]; they just need to have a level above 20 ng/mL. For me, a threshold is the best strategy on a population basis.”
During an open comment session, attendee Dr. Neil C. Binkley expressed concern over applying Z-score principles to the vitamin D field. “I love bone density measurement,” said Dr. Binkley, codirector of the Osteoporosis Clinical Center & Research Program at the University of Wisconsin, Madison, and past president of the International Society for Clinical Densitometry. “The T-score was in fact an advance in the field. But I can’t tell you how strongly I would urge you to not consider T-scores or Z-scores or something like that in the vitamin D field. Rather, I would urge that we do a better job at measuring 25-hydroxyvitamin D so our laboratories agree and have concise guidance for primary care. If you choose to go into the probability realm and the Z-scores, it is going to be a disaster.”
The presenters reported having no financial disclosures.
On Twitter @dougbrunk
FROM AN NIH PUBLIC CONFERENCE ON VITAMIN D
A step away from immediate umbilical cord clamping
The common practice of immediate cord clamping, which generally means clamping within 15-20 seconds after birth, was fueled by efforts to reduce the risk of postpartum hemorrhage, a leading cause of maternal death worldwide. Immediate clamping was part of a full active management intervention recommended in 2007 by the World Health Organization, along with the use of uterotonics (generally oxytocin) immediately after birth and controlled cord traction to quickly deliver the placenta.
Adoption of the WHO-recommended “active management of the third stage of labor” (AMTSL) worked, leading to a 70% reduction in postpartum hemorrhage and a 60% reduction in blood transfusion compared with passive management. However, it appears that immediate cord clamping has not played an important role in these reductions. Several randomized controlled trials have shown that early clamping does not impact the risk of postpartum hemorrhage (> 1000 cc or > 500 cc), nor does it impact the need for manual removal of the placenta or the need for blood transfusion.
Instead, the critical component of the AMTSL package appears to be administration of a uterotonic, as reported in a large WHO-directed multicenter clinical trial published in 2012. The study also found that women who received controlled cord traction bled an average of 11 cc less – an insignificant difference – than did women who delivered their placentas by their own effort. Moreover, they had a third stage of labor that was an average of 6 minutes shorter (Lancet 2012;379:1721-7).
With assurance that the timing of umbilical cord clamping does not impact maternal outcomes, investigators have begun to look more at the impact of immediate versus delayed cord clamping on the health of the baby.
Thus far, the issues in this arena are a bit more complicated than on the maternal side. There are indications, however, that slight delays in umbilical cord clamping may be beneficial for the newborn – particularly for preterm infants, who appear in systematic reviews to have a nearly 50% reduction in intraventricular hemorrhage when clamping is delayed.
Timing in term infants
The theoretical benefits of delayed cord clamping include increased neonatal blood volume (improved perfusion and decreased organ injury), more time for spontaneous breathing (reduced risks of resuscitation and a smoother transition of cardiopulmonary and cerebral circulation), and increased stem cells for the infant (anti-inflammatory, neurotropic, and neuroprotective effects).
Theoretically, delayed clamping will increase the infant’s iron stores and lower the incidence of iron deficiency anemia during infancy. This is particularly relevant in developing countries, where up to 50% of infants have anemia by 1 year of age. Anemia is consistently associated with abnormal neurodevelopment, and treatment may not always reverse developmental issues.
On the negative side, delayed clamping is associated with theoretical concerns about hyperbilirubinemia and jaundice, hypothermia, polycythemia, and delays in the bonding of infants and mothers.
For term infants, our best reading on the benefits and risks of delayed umbilical cord clamping comes from a 2013 Cochrane systematic review that assessed results from 15 randomized controlled trials involving 3,911 women and infant pairs. Early cord clamping was generally carried out within 60 seconds of birth, whereas delayed cord clamping involved clamping the umbilical cord more than 1 minute after birth or after cord pulsation had ceased.
The review found that delayed clamping was associated with a significantly higher neonatal hemoglobin concentration at 24-48 hours postpartum (a weighted mean difference of 2 g/dL) and increased iron reserves up to 6 months after birth. Infants in the early clamping group were more than twice as likely to be iron deficient at 3-6 months compared with infants whose cord clamping was delayed (Cochrane Database Syst. Rev. 2013;7:CD004074).
There were no significant differences between early and late clamping in neonatal mortality or for most other neonatal morbidity outcomes. Delayed clamping also did not increase the risk of severe postpartum hemorrhage, blood loss, or reduced hemoglobin levels in mothers.
The downside to delayed cord clamping was an increased risk of jaundice requiring phototherapy. Infants in the late cord clamping group were 40% more likely to need phototherapy – a difference that equates to 3% of infants in the early clamping group and 5% of infants in the late clamping group.
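Using only the rates quoted above (3% with early clamping vs 5% with delayed clamping), the same trade-off can be restated as an absolute risk difference and a number needed to harm. The short sketch below is simply that arithmetic; it adds no data beyond the figures already reported.

```python
# Arithmetic restatement of the phototherapy figures quoted above:
# 3% of early-clamped vs 5% of late-clamped infants needed phototherapy.
early_rate = 0.03    # proportion needing phototherapy, early clamping
late_rate = 0.05     # proportion needing phototherapy, delayed clamping

risk_difference = late_rate - early_rate      # 0.02, i.e., 2 percentage points
number_needed_to_harm = 1 / risk_difference   # 50 infants per extra phototherapy course

print(f"Absolute risk difference: {risk_difference:.0%}")      # 2%
print(f"Number needed to harm: {number_needed_to_harm:.0f}")   # 50
```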
Data in the Cochrane review were insufficient to draw reliable conclusions about the comparative effects on other short-term outcomes such as symptomatic polycythemia, respiratory problems, hypothermia, and infection, and data on long-term outcomes were also limited.
In practice, this means that the risk of jaundice must be weighed against the risk of iron deficiency. In developed countries we have the resources both to increase iron stores of infants and to provide phototherapy. While the WHO recommends umbilical cord clamping after 1-3 minutes to improve an infant’s iron status, I do not believe the evidence is strong enough to universally adopt such delayed cord clamping in the United States.
Considering the risks of jaundice and the relative infrequency of iron deficiency in the United States, we should not routinely delay clamping for term infants at this point.
A recent committee opinion developed by the American College of Obstetricians and Gynecologists and endorsed by the American Academy of Pediatrics (No. 543, December 2012) captures this view by concluding that “insufficient evidence exists to support or to refute the benefits from delayed umbilical cord clamping for term infants that are born in settings with rich resources.” Although the ACOG opinion preceded the Cochrane review, the committee, of which I was a member, reviewed much of the same literature.
Timing in preterm infants
Preterm neonates are at increased risk of temperature dysregulation, hypotension, and the need for rapid initial pediatric care and blood transfusion. The increased risk of intraventricular hemorrhage and necrotizing enterocolitis in preterm infants is possibly related to the increased risk of hypotension.
As with term infants, a 2012 Cochrane systematic review offers good insight into our current knowledge. This review of umbilical cord clamping at preterm birth covers 15 studies that included 738 infants delivered between 24 and 36 weeks of gestation. The timing of umbilical cord clamping ranged from 25 seconds to a maximum of 180 seconds (Cochrane Database Syst. Rev. 2012;8:CD003248).
Delayed cord clamping was associated with fewer transfusions for anemia or low blood pressure, less intraventricular hemorrhage of all grades (relative risk 0.59), and a lower risk for necrotizing enterocolitis (relative risk 0.62), compared with immediate clamping.
While there were no clear differences with respect to severe intraventricular hemorrhage (grades 3-4), the nearly 50% reduction in intraventricular hemorrhage overall among deliveries with delayed clamping was significant enough to prompt ACOG to conclude that delayed cord clamping should be considered for preterm infants. This reduction in intraventricular hemorrhage appears to be the single most important benefit, based on current findings.
The data on cord clamping in preterm infants are suggestive of benefit, but are not robust. The studies published thus far have been small, and many of them, as the 2012 Cochrane review points out, involved incomplete reporting and wide confidence intervals. Moreover, just as with the studies on term infants, there has been a lack of long-term follow-up in most of the published trials.
When considering delayed cord clamping in preterm infants, as the ACOG Committee Opinion recommends, I urge focusing on earlier gestational ages. Allowing more placental transfusion at births that occur at or after 36 weeks of gestation may not make much sense because by that point the risk of intraventricular hemorrhage is almost nonexistent.
Our practice and the future
At our institution, births that occur at less than 32 weeks of gestation are eligible for delayed umbilical cord clamping, usually at 30-45 seconds after birth. The main contraindications are placental abruption and multiples.
We do not perform any milking or stripping of the umbilical cord, as the risks are unknown and it is not yet clear whether such practices are equivalent to delayed cord clamping. Compared with delayed cord clamping, which is a natural passive transfusion of placental blood to the infant, milking and stripping are not physiologic.
Additional data from an ongoing large international multicenter study, the Australian Placental Transfusion Study, may resolve some of the current controversy. This study is evaluating the timing of cord clamping in neonates < 30 weeks’ gestation. Another study ongoing in Europe should also provide more information.
These studies – and other trials that are larger and longer than the trials published thus far – are necessary to evaluate long-term outcomes and to establish the ideal timing for umbilical cord clamping. Research is also needed to evaluate the management of the third stage of labor relative to umbilical cord clamping, as well as the timing of clamping in relation to the initiation of spontaneous or assisted ventilation.
Dr. Macones said he had no relevant financial disclosures.
Dr. Macones is the Mitchell and Elaine Yanow Professor and Chair, and director of the division of maternal-fetal medicine and ultrasound in the department of obstetrics and gynecology at Washington University, St. Louis.
CKT more important than del(17p) in CLL, group finds
SAN FRANCISCO—New research suggests complex metaphase karyotype (CKT) is a stronger predictor of inferior outcome than 17p deletion in patients with relapsed or refractory chronic lymphocytic leukemia (CLL) who are treated with the BTK inhibitor ibrutinib.
The study showed that CKT, defined as 3 or more distinct chromosomal abnormalities, was independently associated with inferior event-free survival (EFS) and overall survival (OS), but del(17p) was not.
According to investigators, this suggests that del(17p) patients without CKT could be managed with long-term ibrutinib and close monitoring, as these patients have outcomes similar to those of patients without del(17p).
However, patients with CKT will likely require treatment-intensification strategies after ibrutinib-based therapy.
“We believe that patients with a complex karyotype represent an ideal group in whom to study novel treatment approaches, including ibrutinib-based combination regimens and/or consolidated approaches after initial ibrutinib response,” said investigator Philip A. Thompson, MBBS, of the University of Texas MD Anderson Cancer Center in Houston.
Dr Thompson presented his group’s findings at the 2014 ASH Annual Meeting as abstract 22.* Investigators involved in this study received research funding or consultancy fees from Pharmacyclics, Inc., makers of ibrutinib.
Patient characteristics
Dr Thompson and his colleagues analyzed 100 patients with relapsed/refractory CLL who received treatment with ibrutinib-based regimens—50 with ibrutinib alone, 36 with ibrutinib and rituximab, and 14 with ibrutinib, rituximab, and bendamustine.
The median age was 65 (range, 35-83), patients received a median of 2 prior therapies (range, 1-12), and 19% were fludarabine-refractory. Sixty percent of patients had Rai stage III-IV disease, 52% had bulky adenopathy, 81% had unmutated IGHV, and 56% had β2-microglobulin ≥ 4.0 mg/L.
FISH was available for 94 patients, and metaphase analysis was available for 65 patients. Forty-two percent (27/65) of patients had CKT, 28% (26/94) had del(11q), and 48% (45/94) had del(17p).
Of the 45 patients who had del(17p), 23 also had CKT. And of the 49 patients who did not have del(17p), 4 had CKT.
Event-free survival
The median follow-up in surviving patients was 27 months (range, 11-48). Eight patients had planned allogeneic stem cell transplant and were censored for the EFS analysis.
“As has been shown previously, patients with 17p deletion by FISH did have inferior event-free survival,” Dr Thompson said. “And when we looked at those patients with complex metaphase karyotype, there was a highly significant inferior event-free survival in these patients, compared to those [without] complex karyotype.”
EFS was 78% in patients with neither del(17p) nor del(11q), 69% in patients with del(11q), and 60% in patients with del(17p) (P=0.014).
EFS was 82% in patients without CKT and 44% in those with CKT (P<0.0001). In patients with del(17p), EFS was 78% in those without CKT and 48% in those with CKT (P=0.047).
In patients without CKT, EFS was 79% in those without del(17p) or del(11q), 90% in those with del(11q), and 78% in those with del(17p) (P=0.516).
“Interestingly, when we looked at the events that occurred in those patients without complex karyotype, none were due to CLL progression or Richter’s transformation,” Dr Thompson said.
In multivariable analysis, CKT was significantly associated with EFS (P=0.011), but del(17p) was not (P=0.887).
Overall survival
There was no significant difference in OS according to the presence of del(17p) or del(11q). OS was 87% in patients with neither del(17p) nor del(11q), 81% in patients with del(11q), and 67% in patients with del(17p) (P=0.054).
However, there was a significant difference in OS for patients with and without CKT. OS was 82% in patients without CKT and 56% in patients with CKT (P=0.006).
Among patients without CKT, OS was 84% in those with neither del(17p) nor del(11q), 80% in those with del(11q), and 78% in those with del(17p) (P=0.52).
In multivariable analysis, OS was significantly associated with CKT (P=0.011) and fludarabine-refractory disease (P=0.004) but not del(17p) (P=0.981).
“So, in summary, complex karyotype appears to be a more important predictor of outcomes in patients with relapsed or refractory CLL treated with ibrutinib-based regimens than the presence of del(17p) by FISH,” Dr Thompson said.
“Patients without complex karyotype have a low rate of disease progression, including those who have del(17p). Most progressions during ibrutinib therapy occur late, beyond the 12-month time point, but survival is short after disease progression.”
*Information in the abstract differs from that presented at the meeting.
Long-Acting Insulin Analogs: Effects on Diabetic Retinopathy
Long-acting insulin analogs are designed to enhance glycemic control while limiting the risk of hypoglycemia. But structural modifications of the insulin molecule can alter biological responses and binding characteristics with specific receptors; in short, they can potentially raise the risk of sight-threatening diabetic retinopathy (STDR), say researchers from Taipei City Hospital and National Taiwan University, both in Taiwan.
The researchers note that some clinical trials have reported that intensification of insulin therapy might accelerate progression of pre-existing STDR. However, they add that some studies used cancer cell lines, and insulin was administered at supraphysiologic concentrations.
The researchers conducted a retrospective study to evaluate the effects of long-acting insulin analogs (glargine and/or detemir), compared with neutral protamine Hagedorn (NPH) insulin, on the progression of STDR in 46,739 patients with type 2 diabetes mellitus (T2DM).
They found no difference in the risk of STDR with the long-acting insulin analogs in either the matched or the unmatched cohorts. For instance, among 8,947 glargine initiators there were 479 events over a median follow-up of 483 days, compared with 541 events among 8,947 NPH initiators over a median follow-up of 541 days. The detemir group, with 411 days of follow-up, had 64 events.
Despite a “relatively short” observation period, the researchers say their findings agree with those of a previous open-label randomized study of patients with T2DM, which found treatment with insulin glargine over 5 years did not increase progression of STDR, compared with NPH insulin treatment.
Source
Lin JC, Shau WY, Lai MS. Clin Ther. 2014;36(9):1255-1268.
doi: 10.1016/j.clinthera.2014.06.031.
CHMP supports expanding use of lenalidomide in MM
The European Medicines Agency’s Committee for Medicinal Products for Human Use (CHMP) is recommending approval of continuous oral treatment with lenalidomide (Revlimid) in adults with previously untreated multiple myeloma (MM) who are ineligible for hematopoietic stem cell transplant (HSCT).
The European Commission, which generally follows the CHMP’s recommendations, is expected to make its final decision in about 2 months.
Lenalidomide is not currently approved to treat newly diagnosed MM in any country.
The drug is approved in the European Union (EU) for use in combination with dexamethasone to treat adults with MM who have received at least one prior therapy.
Lenalidomide is also approved in the EU to treat patients with transfusion-dependent anemia due to low- or intermediate-1-risk myelodysplastic syndromes associated with 5q deletion when other therapeutic options are insufficient or inadequate.
The CHMP’s recommendation to extend the use of lenalidomide to HSCT-ineligible patients with newly diagnosed MM was based on the results of 2 studies: MM-015 and MM-020, also known as FIRST.
The FIRST trial
In the phase 3 FIRST trial, researchers enrolled 1,623 patients who were newly diagnosed with MM and not eligible for HSCT.
Patients were randomized to receive lenalidomide and dexamethasone (Rd) in 28-day cycles until disease progression (n=535), 18 cycles of lenalidomide and dexamethasone (Rd18) for 72 weeks (n=541), or melphalan, prednisone, and thalidomide (MPT) for 72 weeks (n=547).
Response rates were significantly better with continuous Rd (75%) and Rd18 (73%) than with MPT (62%, P<0.001 for both comparisons). Complete response rates were 15%, 14%, and 9%, respectively.
The median progression-free survival was 25.5 months with continuous Rd, 20.7 months with Rd18, and 21.2 months with MPT.
This resulted in a 28% reduction in the risk of progression or death for patients treated with continuous Rd compared with those treated with MPT (hazard ratio [HR]=0.72, P<0.001) and a 30% reduction compared with Rd18 (HR=0.70, P<0.001).
The pre-planned interim analysis of overall survival showed a 22% reduction in the risk of death for continuous Rd vs MPT (HR=0.78, P=0.02), but the difference did not cross the pre-specified superiority boundary (P<0.0096).
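As a quick check on the arithmetic behind these figures, the minimal Python sketch below converts the reported hazard ratios into the quoted percent reductions using the approximation reduction = 1 − HR; it uses only the published point estimates and is not a reanalysis of the trial data.

```python
# Minimal sketch: converting a hazard ratio into the percent risk
# reduction quoted in the text (reduction = 1 - HR). Values are the
# published FIRST-trial point estimates; this is illustrative
# arithmetic only, not a reanalysis of patient-level data.
hazard_ratios = {
    "continuous Rd vs MPT (progression or death)": 0.72,
    "continuous Rd vs Rd18 (progression or death)": 0.70,
    "continuous Rd vs MPT (overall survival, interim)": 0.78,
}

for comparison, hr in hazard_ratios.items():
    reduction_pct = (1 - hr) * 100
    print(f"{comparison}: HR={hr:.2f} -> {reduction_pct:.0f}% risk reduction")
```

Running the sketch reproduces the 28%, 30%, and 22% reductions cited in the trial summary.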
Adverse events reported in 20% or more of patients in the continuous Rd, Rd18, or MPT arms included diarrhea (45.5%, 38.5%, 16.5%), anemia (43.8%, 35.7%, 42.3%), neutropenia (35.0%, 33.0%, 60.6%), fatigue (32.5%, 32.8%, 28.5%), back pain (32.0%, 26.9%, 21.4%), insomnia (27.6%, 23.5%, 9.8%), asthenia (28.2%, 22.8%, 22.9%), rash (26.1%, 28.0%, 19.4%), decreased appetite (23.1%, 21.3%, 13.3%), cough (22.7%, 17.4%, 12.6%), pyrexia (21.4%, 18.9%, 14.0%), muscle spasms (20.5%, 18.9%, 11.3%) and abdominal pain (20.5%, 14.4%, 11.1%).
The incidence of invasive second primary malignancies was 3% in patients taking continuous Rd, 6% in patients taking Rd18, and 5% in those taking MPT. The overall incidence of solid tumors was 3% in both the continuous Rd and MPT arms and 5% in the Rd18 arm.
The MM-015 trial
In the phase 3 MM-015 study, researchers enrolled 459 patients who were 65 or older and newly diagnosed with MM. The team compared melphalan-prednisone-lenalidomide induction followed by lenalidomide maintenance (MPR-R) with melphalan-prednisone-lenalidomide (MPR) or melphalan-prednisone (MP) followed by placebo.
Patients who received MPR-R or MPR had significantly better response rates than patients who received MP, at 77%, 68%, and 50%, respectively (P<0.001 and P=0.002 for the comparisons with MP).
The median progression-free survival was also significantly longer with MPR-R (31 months) than with MPR (14 months, HR=0.49, P<0.001) or MP (13 months, HR=0.40, P<0.001).
During induction, the most frequent adverse events were hematologic. Grade 4 neutropenia occurred in 35% of patients in the MPR-R arm, 32% in the MPR arm, and 8% in the MP arm. The 3-year rate of second primary malignancies was 7%, 7%, and 3%, respectively.
Bisphosphonates may protect against endometrial cancer
The use of nitrogenous bisphosphonates was associated with a nearly 50% reduction in the incidence of endometrial cancer among women in the PLCO, or Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial.
The endometrial cancer incidence rate among women in the study who reported ever using nitrogenous bisphosphonates was 8.7/10,000 person-years, compared with 17.7/10,000 person-years among those who reported never being exposed to nitrogenous bisphosphonates (rate ratio, 0.49), Sharon Hensley Alford, Ph.D., of the Henry Ford Health System, Detroit, and her colleagues reported online Dec. 22 in Cancer.
The effect was similar after adjustment for age, race, body mass index, smoking status, and use of hormone therapy (hazard ratio, 0.56). The effect was also similar for both type I and type II disease, although there were only nine cases of type II disease, so the finding did not reach statistical significance, the investigators reported (Cancer 2014 Dec. 22 [doi:10.1002/cncr.28952]).
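The crude rate ratio can be reproduced directly from the two reported incidence rates, as in the minimal Python sketch below; the person-year denominators and the covariate-adjusted model behind the hazard ratio of 0.56 are not given in the report, so only the unadjusted ratio is computed here.

```python
# Minimal sketch: reproducing the crude rate ratio from the reported
# endometrial cancer incidence rates (events per 10,000 person-years).
# The person-year denominators are not published in this summary, so
# only the ratio of the quoted rates is calculated; the adjusted
# hazard ratio of 0.56 comes from the authors' multivariable model
# and cannot be rederived from these two numbers.
rate_ever_users = 8.7    # per 10,000 person-years, ever users of nitrogenous bisphosphonates
rate_never_users = 17.7  # per 10,000 person-years, never users

rate_ratio = rate_ever_users / rate_never_users
print(f"Crude rate ratio: {rate_ratio:.2f}")  # ~0.49, matching the reported value
```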
PLCO study subjects included in the current analysis were 23,485 women aged 55-74 years at study entry between 1993 and 2001, who had no cancer diagnosed prior to year 5 of the study when they completed a supplemental questionnaire to assess bone medication use. The women were followed until last known contact, death, or endometrial cancer diagnosis.
The findings support those of preclinical studies demonstrating antitumor effects of bisphosphonates, and suggest that their use may protect against endometrial cancer, the investigators said.
“However, additional studies are needed that include other potential confounders and a larger sample so that type II endometrial cancer could be assessed more confidently,” they concluded, adding that a trial assessing for endometrial, breast, and colorectal cancer in postmenopausal women would be ideal.
The PLCO trial was funded by the National Institutes of Health. The authors reported having no relevant financial disclosures.
FROM CANCER
Key clinical point: Women with a history of bisphosphonate use had a reduced risk of developing endometrial cancer.
Major finding: The endometrial cancer incidence rate was 8.7 vs. 17.7/10,000 person-years for ever vs. never users of nitrogenous bisphosphonates (rate ratio, 0.49).
Data source: An analysis of data from 23,485 women from a randomized population-based trial.
Disclosures: The PLCO trial was funded by the National Institutes of Health. The authors reported having no financial disclosures.
A Decision Aid Did Not Improve Patient Empowerment for Setting and Achieving Diabetes Treatment Goals
Study Overview
Objective. To determine if a patient-oriented decision aid for prioritizing treatment goals in diabetes leads to changes in patient empowerment for setting and achieving goals and in treatment.
Design. Randomized controlled trial.
Setting and participants. Study participants were recruited from 18 general practices in the north of the Netherlands between April 2011 and August 2012. Participants were included if they had a diagnosis of type 2 diabetes and were managed in primary care. Participants were identified from the electronic medical record system, and at least 40 patients from each practice were selected to be contacted for participation. Subjects were excluded if they had had a myocardial infarction in the preceding year, had experienced a stroke, had heart failure, angina, or a terminal illness, or were more than 65 years of age when they received their diabetes diagnosis. Other exclusion criteria included dementia, cognitive deficits, blindness, and an inability to read Dutch. Eligibility criteria were confirmed with the health care provider from each practice. Practices included in the study shared several features: (1) each had an electronic medical record system supporting structured care protocols; (2) most had a nurse practitioner or specialized assistant for diabetes care who carried out the quarterly diabetes checks and was trained to conduct physical examinations, risk assessments, patient education, and counseling; and (3) all practices had received training in motivational interviewing.
The decision aid was delivered either on a computer screen or in a printed version, and was presented as either a short version, showing treatment effects on myocardial infarction risk only, or an extended version, which also covered effects on additional outcomes (stroke, amputation, blindness, renal failure). Practices were randomly assigned to use the computer screen or printed version, stratified by practice size (< 2500 patients or > 2500 patients) and number of GPs (solo or several). Within each practice, consenting patients were randomized to the short version of the aid, the extended version, or the control group.
Intervention. The decision aid presents individually tailored information on risks and treatment options for multiple risk factors. The aid focuses on shared goal setting and decision making, particularly with respect to the drug treatment of risk factors including hemoglobin A1c, systolic blood pressure, low-density lipoprotein cholesterol, and smoking. The decision aid is designed to be used by patients before a regular check-up and discussed with their health care provider during the visit to help prioritize the treatments that will maximize outcomes; the aid summarizes the effects of the various treatment options. Patients were asked to arrive at the practice 15 minutes in advance to go through the information, either in print or on the computer; health care providers were expected to support patients in thinking about treatment goals and options. Patients in the control group received usual care.
Main outcome measures. The primary outcome measure was the empowerment of patients for setting and achieving goals, which was measured with the Diabetes Empowerment Scale (DES-III). Other outcome measures included changes in treatment, including intensification of drug treatment and treatment with ACE inhibitors.
Main results. A total of 344 patients were included in the study and were randomized to the intervention (n = 225) or usual care group (n = 119). Patients in the intervention group were comparable to usual care patients in terms of age, sex, and educational level. However, there were several differences between the 2 groups: intervention patients were more likely to have a well-controlled HbA1c level at baseline and less likely to have well-controlled blood pressure at baseline. Among participants in the intervention group, only 46% reported having received the basic elements of the intervention. The mean empowerment score increased 0.1 point on a 5-point scale in the intervention group, which was not different from the control group (mean adjusted difference, 0.039 points [95% confidence interval (CI), −0.056 to 0.134]). Lipid-lowering treatment was intensified in 25% of intervention and 12% of control participants (odds ratio [OR], 2.5 [95% CI, 0.89–7.23]). Exploratory analyses comparing the printed version of the aid with control did find greater intensification of lipid-lowering treatment, although the confidence interval was wide (OR, 3.90 [95% CI, 1.29–11.80]). No other differences in treatment plan were observed.
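As a rough consistency check, the minimal Python sketch below derives a crude odds ratio from the quoted proportions of participants with intensified lipid-lowering treatment (25% vs 12%); the underlying counts and any adjustment used in the published OR of 2.5 are not reported in this summary, so this is only a back-of-the-envelope approximation.

```python
# Minimal sketch: checking that the reported odds ratio for intensified
# lipid-lowering treatment (~2.5) is roughly consistent with the quoted
# proportions (25% of intervention vs 12% of control participants).
# The actual counts and the adjustment model are not given here, so this
# is a crude, unadjusted approximation only.
p_intervention = 0.25
p_control = 0.12

odds_intervention = p_intervention / (1 - p_intervention)
odds_control = p_control / (1 - p_control)
odds_ratio = odds_intervention / odds_control
print(f"Crude odds ratio: {odds_ratio:.2f}")  # ~2.4, close to the reported OR of 2.5
```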
Conclusions. The treatment decision aid for diabetes did not improve patient empowerment or substantially alter treatment plans compared with usual care. However, this finding is limited by the low uptake of the decision aid during the study period.
Commentary
Patient engagement through shared decision making is an important element in chronic disease management, particularly in diseases such as diabetes where there are a number of significant tasks, including monitoring and administration of medication, that are key to its successful management. The use of decision aids is an innovation that has demonstrated effects in improving patient understanding of disease, and has potential downstream effect in improving management and control of the disease [1]. However, the use of decision aids is not without limitations—patients with poorer health literacy, and perhaps lower socioeconomic status, may derive less clinical benefit [2], and in older adults cognitive and physical limitations may also limit their use.
This study found that the decision aid did not significantly improve patient empowerment or alter treatment plans. In comparison with previous studies on decision aids for diabetes [3,4], this study is notable in that it did not find any significant clinical impact of the decision aid compared with usual care. However, it is important to consider reasons that may explain the null finding. First, the study has a rather complicated design, with 4 different intervention groups. The design attempts to differentiate the intervention groups by delivery format (computer screen vs. printed) and by content (information on myocardial infarction risk only vs. all outcomes). The rationale was that this could provide evidence suggesting the most effective form of the decision aid, but the drawback is that it potentially weakens the power of the study, increasing the likelihood of a false-negative finding. Second, in contrast to other studies, this study used a different primary outcome: a measure of patient empowerment. Though an important concept to measure, it is less clear what impact should be expected and what degree of change would be clinically significant. Third, as noted by the investigators, the decision aid had limited uptake in the intervention group; this may be related to its design and format. The challenge in designing a decision aid is that it needs to be simple and easy to use, consume little time, and yet be adequately informative for patients. Finally, another notable feature of the study is that the control group was an active control group, in that providers in the practices had significant training in motivational interviewing and communication, which may have made it more challenging to demonstrate an impact in the intervention group.
Applications for Clinical Practice
Decision aids remain a potentially important addition to the management of chronic diseases such as diabetes, and most prior studies have demonstrated a significant impact. Despite its limitations, the current study does point out that different formats of decision aid may have different effects on patient outcomes. Practices adopting decision aids for chronic disease management need to take into account the format, the information presented, and the burden of use of the aid. Further studies may help to elucidate how decision aids can be optimized to maximize clinical impact.
—William Hung, MD, MPH
1. Stacey D, Légaré F, Col NF, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev 2014;1:CD001431.
2. Coylewright M, Branda M, Inselman JW, et al. Impact of sociodemographic patient characteristics on the efficacy of decision AIDS: a patient-level meta-analysis of 7 randomized trials. Circ Cardiovasc Qual Outcomes 2014;7:360–7.
3. Mathers N, Ng CJ, Campbell MJ, et al. Clinical effectiveness of a patient decision aid to improve decision quality and glycaemic control in people with diabetes making treatment choices: a cluster randomized controlled trial (PANDAs) in general practice. BMJ Open 2012;2:e001469.
4. Branda ME, LeBlanc A, Shah ND, et al. Shared decision making for patients with type 2 diabetes: a randomized trial in primary care. BMC Health Serv Res 2013;13:301.