Heart failure severity at AMI predicts long-term CV death risk
CHICAGO – The severity of heart failure in the setting of acute myocardial infarction predicts long-term cardiovascular death risk, according to a post hoc analysis of data from the IMPROVE IT Trial.
Among 11,185 individuals with MI and known Killip Classification who were part of that randomized, double-blind trial, those with Killip Class II or greater had more than double the risk of long-term cardiovascular death, compared with those with Killip Class I heart failure, Dr. Michael G. Silverman, a cardiovascular medicine fellow at Brigham and Women’s Hospital, Boston, and a research fellow at the Thrombolysis in MI (TIMI) Study Group, reported in a poster at the annual meeting of the American College of Cardiology.
After adjusting for a number of factors, including age, gender, diabetes, hypertension, left ventricular ejection fraction (LVEF), beta blocker and ACE inhibitor/angiotensin receptor blocker use at randomization, and percutaneous coronary intervention at the index event, the 7-year event rate was 14.5% among those with Killip Class II or higher vs. 5.7% among those with Killip Class I heart failure (adjusted hazard ratio, 1.9), Dr. Silverman reported on behalf of the TIMI Study Group.
The event rates from 30 days to 6 months were 4.85% and 1.25% in the two groups, respectively (adjusted hazard ratio, 1.96), and from 6 months to 7 years they were 1.52% and 0.61%, respectively (adjusted hazard ratio, 1.85).
Further, the increased risk of cardiovascular death associated with Killip Class II or higher was also apparent among important subgroups, including those with ST-segment elevation MI, those with non-STEMI, those with LVEF of 50% or greater, those with LVEF less than 50%, those with diabetes, those without diabetes, men, and women (adjusted hazard ratios ranging from 1.6 to 2.1), Dr. Silverman explained in an interview.
The severity of heart failure according to Killip Class is a strong independent predictor of mortality in the setting of acute MI, and the current findings demonstrate that it also predicts cardiovascular death for at least 7 years, suggesting a need for careful attention to the findings of the physical exam in AMI, as it can serve as an important biomarker of long-term cardiovascular death risk, he said.
“AMI patients with Killip Class II or greater warrant continued close medical follow-up and adherence to guideline-directed medical therapy beyond the acute hospitalization to prevent this potentially modifiable outcome,” he concluded.
Dr. Silverman reported having no disclosures.
AT ACC 16
Key clinical point: The severity of heart failure in the setting of acute myocardial infarction predicts long-term cardiovascular death risk, according to a post hoc analysis of data from the IMPROVE IT Trial.
Major finding: The 7-year event rate was 14.5% among those with Killip Class II or higher vs. 5.7% among those with Killip Class I heart failure (adjusted hazard ratio, 1.9).
Data source: A post hoc analysis of data from 11,185 subjects from the IMPROVE IT trial.
Disclosures: Dr. Silverman reported having no disclosures.
Women with suspected CAD classified as lower risk than men
Women with suspected coronary artery disease had similar symptoms and more heart disease risk factors, compared with men, but were assessed as lower risk by their providers and on all standard risk scores, according to a secondary analysis of the PROMISE trial.
The results “highlight the need for sex-specific approaches to coronary artery disease evaluation and testing,” said Kshipra Hemal at Duke Clinical Research Institute in Durham, N.C., and her associates. The findings will be presented April 3 at the annual meeting of the American College of Cardiology and were published online March 23 in the Journal of the American College of Cardiology: Cardiovascular Imaging.
The PROMISE (Prospective Multicenter Imaging Study for the Evaluation of Chest Pain) trial is one of the largest contemporary trials of symptomatic, nonacute suspected CAD. The study included 10,003 stable outpatients, nearly half of whom were women. The researchers calculated the 2008 Framingham score, 2013 Atherosclerotic Cardiovascular Disease score, 1979 Diamond and Forrester score, modified 2011 Diamond and Forrester score, and 2012 combined Diamond-Forrester and Coronary Artery Surgery Study scores for all patients. Patients also were randomly assigned to either anatomical testing with CT angiography or to functional testing with exercise electrocardiogram, stress nuclear imaging, or stress echocardiogram (J Am Coll Cardiol Img. 2016 Mar 23. doi: 10.1016/j.jcmg.2016.02.001).
Women in the study were an average of 3 years older than the men and were significantly more likely to be hypertensive (67% vs. 63%), dyslipidemic (69% vs. 66%), and to have a family history of premature CAD (35% vs. 29%; P less than .01 for all comparisons), the researchers reported. Nonetheless, all five risk scores characterized women as lower risk than men (P less than .001 for mean differences). Moreover, before testing, providers characterized 41% of women as having a low (less than 30%) likelihood of CAD, compared with 34% of men (P less than .001).
Women were more likely than men to be referred for stress echocardiography or nuclear stress test, but only 9.7% had a positive noninvasive test, compared with 15% of men (P less than .001), the researchers also reported. “A number of characteristics predicted positive test results, and many characteristics were similar between the sexes,” they added. “However, in multivariable models, key predictors of test positivity were few and varied by sex.” Body mass index and Framingham risk score predicted a positive test for women, while both the Framingham and modified Diamond-Forrester risk scores predicted a positive test for men.
Chest pain was the most common primary symptom, reported by nearly three-quarters of both women and men, and was described as “crushing/pressure/squeezing/tightness” 53% and 46% of the time, respectively (P less than .001). Dyspnea was the second most frequent primary symptom at 15% for both sexes. Women were more likely than men to describe back pain, neck or jaw pain, or palpitations, but only 0.6% to 2.7% of patients ranked these among their main symptoms.
“Further studies are warranted to examine the underlying pathophysiology and implications for clinical care of the sex-based clinical differences observed along the entire diagnostic pathway of suspected CAD, including risk factor burden, presenting symptoms, and testing results,” the researchers concluded.
The PROMISE study was funded by the National Heart, Lung, and Blood Institute. Ms. Hemal had no disclosures. Senior author Dr. Pamela S. Douglas disclosed grant support from HeartFlow and having served on a data and safety monitoring board for General Electric Healthcare. Two of the other 15 coinvestigators also disclosed relationships with industry; the rest had no disclosures.
Despite symptomatic presentation, greater family history of premature coronary artery disease, and higher risk factor burden, including older age and greater prevalence of hypertension and dyslipidemia, the women in PROMISE were more likely to be characterized as low risk based on standard cardiovascular risk assessment scores and thus, not surprisingly, also were considered to be at lower risk by their providers. These findings add credence to the ongoing concerns that women are preferentially likely to receive less intensive management of CAD than their male counterparts.
The 2014 American Heart Association Consensus Statement on noninvasive diagnostic testing in women with suspected ischemic heart disease highlighted the development of novel diagnostic tools that have an expanded role in the evaluation of symptomatic female patients to detect not only focal epicardial coronary stenosis, but also nonobstructive atherosclerosis as well as the identification of ischemia resulting from microvascular dysfunction. Such methods using advanced imaging are making steady progress in the understanding of microvascular disease and its consequences.
We agree with the PROMISE investigators that focused sex-specific diagnostic strategies are needed to reduce the cardiovascular mortality and morbidity in women. With emerging data on the full pathophysiologic spectrum of ischemic heart disease in women, diagnostic algorithms must include functional and anatomic cardiac tests as well as physiologic assessments of endothelial and microvascular function, for accurately establishing the diagnosis and prognosis of women with suspected IHD.
Dr. Jennifer H. Mieres is with Hofstra University, Hempstead, N.Y. Dr. Robert O. Bonow is with Northwestern University, Chicago. They had no disclosures. These comments are from their editorial (J Am Coll Cardiol Img. 2016 Mar 23. doi: 10.1016/j.jcmg.2016.02.0089).
FROM ACC 16
Key clinical point: Women with suspected coronary artery disease had similar symptoms and more risk factors for coronary artery disease, compared with men, but were classified as lower risk on risk scores and by providers.
Major finding: All risk scores assessed women as being at lower risk than men. Before testing, providers characterized 41% of women and 34% of men as low risk (P less than .001).
Data source: A prospective, multicenter, randomized trial of 10,003 symptomatic outpatients with suspected coronary artery disease.
Disclosures: The PROMISE study was funded by the National Heart, Lung, and Blood Institute. Ms. Hemal had no disclosures. Senior author Dr. Pamela Douglas disclosed grant support from HeartFlow and having served on a data and safety monitoring board for General Electric Healthcare. Two of the other 15 coinvestigators also disclosed relationships with industry; the rest had no disclosures.
Risk score predicts rehospitalization after heart surgery
PHOENIX – A simple, five-element formula can help identify the patients undergoing heart surgery who face the greatest risk for a hospital readmission within 30 days following discharge from their index hospitalization.
The surgeons who developed this formula hope to use it in an investigational program that will target intensified management resources in postsurgical patients who face the highest readmission risk, to cut rehospitalizations and better improve their clinical status and quality of life.
The analysis that produced this formula also documented that the worst offender for triggering rehospitalizations following heart surgery is fluid overload, the proximate readmission cause for 23% of postsurgical patients, Dr. Arman Kilic said at the annual meeting of the Society of Thoracic Surgeons. The next most common cause was infection, which led to 20% of readmissions, followed by arrhythmias, responsible for 8% of readmissions, said Dr. Kilic, a thoracic surgeon at the University of Pennsylvania in Philadelphia.
Because fluid overload, often in the form of pleural effusion, is such an important driver of rehospitalizations, a more targeted management program would include better titration of diuretic treatment to patients following heart surgery, thoracentesis, and closer monitoring of clinical features that flag fluid overload such as weight.
“The volume overload issue is where the money is. If we can reduce that, it could really impact readmissions,” Dr. Kilic said in an interview.
An investigational program to target rehospitalization risk in heart surgery patients is planned at Johns Hopkins Hospital in Baltimore, where Dr. Kilic worked when he performed this analysis. Surgeons at Johns Hopkins are now in the process of getting funding for this pilot program, said Dr. John V. Conte Jr., professor of surgery and director of mechanical circulatory support at Johns Hopkins and a collaborator with Dr. Kilic on developing the risk formula.
“We’ll tailor postoperative follow-up. We’ll get high-risk patients back to the clinic sooner, and we’ll send nurse practitioners to see them to make sure they’re taking their medications and are getting weighed daily,” Dr. Conte said in an interview. “When a patient has heart surgery, they typically retain about 5-10 pounds of fluid. Patients with good renal function give up that fluid easily, but others are difficult to diurese. Many patients go home before they have been fully diuresed, and we need to follow these patients and transition them better to out-of-hospital care.”
He noted that other situations also come up that unnecessarily drive patients back to the hospital when an alternative and less expensive intervention might be equally effective. For example, some patients return to the hospital out of concern for how their chest wound is healing. Instead of being rehospitalized, such patients could be reassured by having them send a nurse a photo of their wound or by coming to an outpatient clinic.
“We need to engage more often with recently discharged patients,” Dr. Conte said in an interview. “Discharging them doesn’t mean separating them from the health care system; it should mean interacting with patients in a different way” that produces better outcomes and patient satisfaction for less money. Developing improved ways to manage recent heart surgery patients following discharge becomes even more critical later this year when, in July, the Centers for Medicare & Medicaid Services adds 30-day readmissions following coronary artery bypass grafting (CABG) to its list of procedures that can generate a penalty to hospitals if they exceed U.S. norms for readmission rates.
The risk model developed by Dr. Kilic, Dr. Conte, and their associates used data collected from 5,360 heart surgery patients treated at Johns Hopkins during 2008-2013. Nearly half the patients underwent isolated CABG, and 20% had isolated valve surgery. Overall, 585 patients (11%) had a hospital readmission within 30 days of their index discharge. One limitation of the analysis was it used data only on readmissions back to Johns Hopkins Hospital.
The researchers used data from three-quarters of the database to derive the risk formula, and from the remaining 25% of the database to validate the formula. A multivariate analysis of demographic and clinical characteristics that significantly linked with an elevated risk for readmissions identified five factors that independently made a significant contribution to readmission risk. The researchers assigned each of these five factors points depending on its relative contribution to readmission risk in the adjusted model: Severe chronic lung disease received 6 points; placement of a ventricular assist device received 5 points, while other types of heart surgery that was not CABG or valve surgery received 4 points (isolated CABG, isolated valve, or combined CABG and valve surgery received 0 points); development of acute renal failure postoperatively but before index discharge received 4 points; an index length of stay beyond 7 days received 4 points; and African American race received 3 points. The maximum number of points a patient could receive was 22.
Patients with a score of 0 had a 6% rate of a 30-day readmission; those with a score of 22 had a 63% readmission rate. For simplicity, Dr. Kilic suggested dividing patients into three categories based on their readmission risk score: Low-risk patients with a score of 0 had a readmission risk of 6%, medium-risk patients with a score of 1-10 had a readmission risk of 12%, and high-risk patients with a score of 11 or more had a readmission risk of 31%. The researchers found a 96% correlation when comparing these predicted readmission risk rates based on the derivation-subgroup analysis with the actual readmission rates seen in the validation subgroup of their database. The targeted risk-management program planned by Dr. Conte would primarily focus on high-risk patients.
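The point assignments and risk strata described above can be sketched in a few lines of code. This is a minimal illustration assuming the point values and category cutoffs as reported; the function and parameter names are invented here and are not part of the published model.

```python
def readmission_score(severe_lung_disease, vad_placement, other_surgery,
                      postop_acute_renal_failure, stay_over_7_days,
                      african_american):
    """Sum the reported point values for each risk factor present.

    Surgery-type points are mutually exclusive: VAD placement (5 points)
    and other non-CABG, non-valve surgery (4 points); isolated CABG,
    isolated valve, or combined CABG/valve surgery contributes 0 points.
    """
    score = 0
    if severe_lung_disease:
        score += 6
    if vad_placement:
        score += 5
    elif other_surgery:
        score += 4
    if postop_acute_renal_failure:
        score += 4
    if stay_over_7_days:       # index length of stay beyond 7 days
        score += 4
    if african_american:
        score += 3
    return score               # maximum possible: 22

def risk_category(score):
    """Map a score to the three reported risk strata."""
    if score == 0:
        return "low (6% observed readmission rate)"
    if score <= 10:
        return "medium (12% observed readmission rate)"
    return "high (31% observed readmission rate)"
```

A patient with severe chronic lung disease and a stay beyond 7 days, for example, would score 10 points and fall in the medium-risk stratum; adding postoperative acute renal failure would push the score to 14 and the high-risk stratum targeted by the planned management program.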
Dr. Kilic and Dr. Conte said they had no relevant financial disclosures.
[email protected]
On Twitter @mitchelzoler
Dr. Kilic's data illustrates common factors resulting in rehospitalization after cardiac surgery. Fastidious fluid management in these patients and others is critical to reduce hospital readmissions. A further point to consider is that many pleural effusions, especially those on the left side, are due to retained hemothorax rather than fluid overload. In those instances, early surgical intervention with video-assisted thoracoscopic surgery, rather than prolonged diuresis, would be optimal.
Dr. Francis J. Podbielski, FCCP, serves on the editorial advisory board for CHEST Physician.
PHOENIX – A simple, five-element formula can help identify the patients undergoing heart surgery who face the greatest risk for a hospital readmission within 30 days following discharge from their index hospitalization.
The surgeons who developed the formula hope to use it in an investigational program that will target intensified management resources at the postsurgical patients who face the highest readmission risk, to cut rehospitalizations and improve their clinical status and quality of life.
The analysis that produced this formula also documented that the worst offender for triggering rehospitalizations following heart surgery is fluid overload, the proximate readmission cause for 23% of postsurgical patients, Dr. Arman Kilic said at the annual meeting of the Society of Thoracic Surgeons. The next most common cause was infection, which led to 20% of readmissions, followed by arrhythmias, responsible for 8% of readmissions, said Dr. Kilic, a thoracic surgeon at the University of Pennsylvania in Philadelphia.
Because fluid overload, often in the form of pleural effusion, is such an important driver of rehospitalizations, a more targeted management program would include better titration of diuretic treatment after heart surgery, thoracentesis when indicated, and closer monitoring of clinical features, such as weight, that flag fluid overload.
“The volume overload issue is where the money is. If we can reduce that, it could really impact readmissions,” Dr. Kilic said in an interview.
An investigational program to target rehospitalization risk in heart surgery patients is planned at Johns Hopkins Hospital in Baltimore, where Dr. Kilic worked when he performed this analysis. Surgeons at Johns Hopkins are now in the process of getting funding for this pilot program, said Dr. John V. Conte Jr., professor of surgery and director of mechanical circulatory support at Johns Hopkins and a collaborator with Dr. Kilic on developing the risk formula.
“We’ll tailor postoperative follow-up. We’ll get high-risk patients back to the clinic sooner, and we’ll send nurse practitioners to see them to make sure they’re taking their medications and are getting weighed daily,” Dr. Conte said in an interview. “When a patient has heart surgery, they typically retain about 5-10 pounds of fluid. Patients with good renal function give up that fluid easily, but others are difficult to diurese. Many patients go home before they have been fully diuresed, and we need to follow these patients and transition them better to out-of-hospital care.”
He noted that other situations also drive patients back to the hospital unnecessarily, when a less expensive alternative intervention might be equally effective. For example, some patients return to the hospital out of concern over how their chest wound is healing. Instead of being rehospitalized, such patients could be reassured by sending a nurse a photo of the wound or by visiting an outpatient clinic.
“We need to engage more often with recently discharged patients,” Dr. Conte said in an interview. “Discharging them doesn’t mean separating them from the health care system; it should mean interacting with patients in a different way” that produces better outcomes and patient satisfaction for less money. Developing improved ways to manage recent heart surgery patients following discharge becomes even more critical later this year when, in July, the Centers for Medicare & Medicaid Services adds 30-day readmissions following coronary artery bypass grafting (CABG) to its list of procedures that can generate a penalty to hospitals if they exceed U.S. norms for readmission rates.
The risk model developed by Dr. Kilic, Dr. Conte, and their associates used data collected from 5,360 heart surgery patients treated at Johns Hopkins during 2008-2013. Nearly half the patients underwent isolated CABG, and 20% had isolated valve surgery. Overall, 585 patients (11%) had a hospital readmission within 30 days of their index discharge. One limitation of the analysis was that it used data only on readmissions back to Johns Hopkins Hospital.
The researchers used data from three-quarters of the database to derive the risk formula and the remaining 25% to validate it. A multivariate analysis of demographic and clinical characteristics identified five factors that each made an independent, significant contribution to readmission risk. The researchers assigned each factor points according to its relative contribution to readmission risk in the adjusted model: severe chronic lung disease received 6 points; placement of a ventricular assist device received 5 points, while other types of heart surgery that were neither CABG nor valve surgery received 4 points (isolated CABG, isolated valve, or combined CABG and valve surgery received 0 points); development of acute renal failure postoperatively but before index discharge received 4 points; an index length of stay beyond 7 days received 4 points; and African American race received 3 points. The maximum possible score was 22.
Patients with a score of 0 had a 6% rate of a 30-day readmission; those with a score of 22 had a 63% readmission rate. For simplicity, Dr. Kilic suggested dividing patients into three categories based on their readmission risk score: Low-risk patients with a score of 0 had a readmission risk of 6%, medium-risk patients with a score of 1-10 had a readmission risk of 12%, and high-risk patients with a score of 11 or more had a readmission risk of 31%. The researchers found a 96% correlation when comparing these predicted readmission risk rates based on the derivation-subgroup analysis with the actual readmission rates seen in the validation subgroup of their database. The targeted risk-management program planned by Dr. Conte would primarily focus on high-risk patients.
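The point weights and risk tiers described above can be sketched in code. This is an illustrative reimplementation based solely on the weights and cutoffs reported here, not the published model; the function names are ours, as is the assumption that the ventricular-assist-device and "other surgery" weights are mutually exclusive (which the 22-point maximum implies).

```python
def readmission_score(severe_lung_disease, vad_placed, other_surgery,
                      postop_renal_failure, los_over_7_days, african_american):
    """Sum the reported point weights for each independent risk factor."""
    score = 0
    if severe_lung_disease:
        score += 6
    if vad_placed:
        score += 5
    elif other_surgery:  # heart surgery that is neither CABG nor valve surgery
        score += 4       # (isolated CABG, valve, or combined CABG/valve score 0)
    if postop_renal_failure:   # acute renal failure before index discharge
        score += 4
    if los_over_7_days:        # index length of stay beyond 7 days
        score += 4
    if african_american:
        score += 3
    return score

def risk_category(score):
    """Map a score to the three simplified tiers and their observed rates."""
    if score == 0:
        return ("low", 0.06)       # 6% observed 30-day readmission rate
    elif score <= 10:
        return ("medium", 0.12)    # 12% observed rate
    else:
        return ("high", 0.31)      # 31% observed rate
```

A patient with every risk factor reaches the maximum of 22 points (6 + 5 + 4 + 4 + 3), the score associated with the 63% readmission rate noted above.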
Dr. Kilic and Dr. Conte said they had no relevant financial disclosures.
[email protected]
On Twitter @mitchelzoler
Key clinical point: A risk score predicted which heart surgery patients faced the greatest risk for hospital readmission within 30 days of their index discharge.
Major finding: Patients with a 0 score had a 6% 30-day readmission rate; a high score of 22 linked with a 63% rate.
Data source: A review of 5,360 heart surgery patients treated at one U.S. center.
Disclosures: Dr. Kilic and Dr. Conte said they had no relevant financial disclosures.
Heart attack patients getting younger, fatter, and less healthy
Despite advances in the prevention and early detection of cardiovascular disease, heart attack patients are getting younger, fatter, and less health conscious.
A look at 20 years’ worth of patient data reveals these and other “alarming trends,” according to Dr. Samir R. Kapadia of the Cleveland Clinic.
“What we found was so very contradictory to what we expected,” he said at a press briefing held in advance of the annual meeting of the American College of Cardiology. “Amazingly, we saw that patients presenting with myocardial infarction were getting younger, and their body mass index was going up. There was more smoking, more hypertension, and more diabetes. And all of this despite our better understanding of cardiovascular risk factors.”
The findings seem to point to a serious gap between gathering scientific knowledge and putting that knowledge into practice.
“We have to extend our efforts and put a lot more into educating patients,” Dr. Kapadia said. “Maybe it’s not enough to just tell people to eat right and exercise – maybe we should also be providing them with a structured program. But this is not just the job of the cardiologist. Primary care physicians have to also have this insight, communicate it to the patients, and get them the resources they need to help prevent heart attacks.”
His retrospective study comprised 3,912 consecutive patients who were treated for ST-segment elevation MI (STEMI) from 1995 to 2014. Data were collected on age, gender, diabetes, hypertension, smoking, lipid levels, chronic renal impairment, and obesity. The group was divided into four epochs: 1995-1999, 2000-2004, 2005-2009, and 2010-2014. The researchers examined these factors both in the entire cohort and in a subset of 1,325 who had a diagnosis of coronary artery disease at the time of their MI.
Patients became significantly younger over the entire study period. In epoch 1, the mean age of the entire cohort was 63.6 years. By epoch 3, this had declined to 60.3 years – a significant drop. The change was also evident in the CAD subset; among these patients, mean age declined from 64.1 years in epoch 1 to 61.8 years in epoch 4.
Tobacco use increased significantly in both groups as well. In the overall cohort, the rate was 27.7% in epoch 1 and 45.4% in epoch 4. In the CAD subset, it rose from 24.6% to 42.7%.
Hypertension in the entire cohort increased from 56.7% to 77.3%. In the CAD subset, it increased from 60.9% to 89%.
Obesity increased in both cohorts in overlapping trends, from about 30% in epoch 1 to 40% in epoch 4.
Diabetes increased as well. In the entire cohort, it rose from 24.6% to 30.6%. In the CAD subset, it rose from 25.4% to 41.5%.
Dr. Kapadia noted that the proportion of patients with at least three major risk factors rose from 65% to 85%, and that the incidence of chronic obstructive pulmonary disease increased from 5% to 12%, although he didn’t break this trend down by group.
He had no financial disclosures.
FROM ACC 16
Key clinical point: Despite advances in understanding heart disease prevention, patients with heart attack are younger and less healthy than they were 20 years ago.
Major finding: Patients are an average of 3 years younger than in 1995, and more are obese and use tobacco.
Data source: A retrospective study of 3,912 patients with acute ST-segment elevation MI.
Disclosures: Dr. Samir Kapadia had no financial disclosures.
Lies, damn lies, and research: Improving reproducibility in biomedical science
The issue of scientific reproducibility has come to the fore in the past several years, driven by noteworthy failures to replicate critical findings in several much-publicized reports, coupled with a series of scandals calling into question the role of journals and granting agencies in maintaining quality and oversight.
In a special Nature online collection, the journal assembled articles and perspectives from 2011 to the present dealing with this issue of research reproducibility in science and medicine. These articles were supplemented with current editorial comment.
Seeing these broad spectrum concerns pulled together in one place makes it difficult not to be pessimistic about the current state of research investigations across the board. The saving grace, however, is that these same reports show that a lot of people realize that there is a problem – people who are trying to make changes and who are in a position to be effective.
According to the reports presented in the collection, problems in research accountability and reproducibility have grown to an alarming extent. By one estimate, irreproducibility costs biomedical research some $28 billion per year (Nature. 2015 Jun 9. doi: 10.1038/nature.2015.17711).
A litany of concerns
In 2012, scientists at Amgen (Thousand Oaks, Calif.) reported that, even while cooperating closely with the original investigators, they were able to reproduce only 6 of 53 studies considered to be benchmarks of cancer research (Nature. 2016 Feb 4. doi: 10.1038/nature.2016.19269).
Scientists at Bayer HealthCare reported in Nature Reviews Drug Discovery that they could successfully reproduce results in only a quarter of 67 so-called seminal studies (2011 Sep. doi: 10.1038/nrd3439-c1).
According to a 2013 report in The Economist, Dr. John Ioannidis, an expert in the field of scientific reproducibility, argued that in his field, “epidemiology, you might expect one in ten hypotheses to be true. In exploratory disciplines like genomics, which rely on combing through vast troves of data about genes and proteins for interesting relationships, you might expect just one in a thousand to prove correct.”
This increasing litany of irreproducibility has raised alarm in the scientific community and has led to a search for answers, as so many preclinical studies form the precursor data for eventual human trials.
Despite the concerns raised, human clinical trials seem to be less at risk for irreproducibility, according to an editorial by Dr. Francis S. Collins, director, and Dr. Lawrence A. Tabak, principal deputy director of the U.S. National Institutes of Health, “because they are already governed by various regulations that stipulate rigorous design and independent oversight – including randomization, blinding, power estimates, pre-registration of outcome measures in standardized, public databases such as ClinicalTrials.gov and oversight by institutional review boards and data safety monitoring boards. Furthermore, the clinical trials community has taken important steps toward adopting standard reporting elements” (Nature. 2014 Jan. doi: 10.1038/505612a).
The paucity of P
Today, a P value of .05 or less is all too often considered the sine qua non of scientific proof. “Most statisticians consider this appalling, as the P value was never intended to be used as a strong indicator of certainty as it too often is today. Most scientists would look at [a] P value of .01 and say that there was just a 1% chance of [the] result being a false alarm. But they would be wrong.” The 2014 Nature report goes on to explain that, according to one widely used calculation, a P value of .01 corresponds to a false-alarm probability of at least 11%, depending on the underlying probability that there is a true effect; a P value of .05 raises that chance of a false alarm to at least 29% (Nature. 2014 Feb. doi: 10.1038/506150a).
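Numbers of this kind can be reproduced with the Sellke-Berger lower bound on the Bayes factor, combined with even prior odds that a real effect exists. This is a sketch under those assumptions, not necessarily the exact calculation the Nature report used, which it does not fully specify:

```python
import math

def min_bayes_factor(p):
    """Sellke-Berger lower bound on the Bayes factor favoring the null,
    valid for p < 1/e (about 0.37)."""
    return -math.e * p * math.log(p)

def false_alarm_probability(p, prior_true=0.5):
    """Posterior probability that a 'significant' result is a false alarm,
    given the prior probability that a true effect exists."""
    bf = min_bayes_factor(p)                      # evidence retained by the null
    prior_odds_null = (1 - prior_true) / prior_true
    posterior_odds_null = prior_odds_null * bf
    return posterior_odds_null / (1 + posterior_odds_null)

# With even (50:50) prior odds of a real effect:
#   false_alarm_probability(0.01) -> about 0.11 (11%)
#   false_alarm_probability(0.05) -> about 0.29 (29%)
```

Because the Bayes factor here is a lower bound on the evidence for the null, these false-alarm probabilities are minimums; with a less favorable prior, the true rates can only be higher, which is the report's point.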
Beyond this assessment problem, P values may allow for considerable researcher bias, conscious and unconscious, even to the extent of encouraging “P-hacking”: one of the few statistical terms to ever make it into the Urban Dictionary. “P-hacking is trying multiple things until you get the desired result” – even unconsciously, according to one researcher quoted.
In addition, “unless statistical power is very high (and much higher than in most experiments), the P value should be interpreted tentatively at best” (Nat Methods. 2015 Feb 26. doi: 10.1038/nmeth.3288).
So bad is the problem that “misuse of the P value – a common test for judging the strength of scientific evidence – is contributing to the number of research findings that cannot be reproduced,” the American Statistical Association warns in a statement released in March, adding that the P value cannot be used to determine whether a hypothesis is true or even whether results are important (Nature. 2016 Mar 7. doi: 10.1038/nature.2016.19503).
And none of this even remotely addresses those instances where researchers report findings that “trend towards significance” when they can’t even meet the magical P threshold.
A muddling of mice (and more)
Fundamental to biological research is the vast array of preliminary animal studies that must be performed before clinical testing can begin.
Animal-based research has been under intense scrutiny due to a variety of perceived flaws and omissions that have been found to be all too common. For example, in a report in PLoS Biology, Dr. Ulrich Dirnagl of the Charité Medical University in Berlin reviewed 100 reports published between 2000 and 2013, which included 522 experiments using rodents to test cancer and stroke treatments. Around two-thirds of the experiments did not report whether any animals had been dropped from the final analysis, and of the 30% that did report rodents dropped from analysis, only 14 explained why (2016 Jan 4. doi: 10.1371/journal.pbio.1002331). Similarly, Dr. John Ioannidis and his colleagues assessed a random sample of 268 biomedical papers listed in PubMed published between 2000 and 2014 and found that only one contained sufficient details to replicate the work (Nature. 2016 Jan 5. doi: 10.1038/nature.2015.19101).
A multitude of genetic and environmental factors have also been found influential in animal research. For example, the gut microbiome (which has been found to influence many aspects of mouse health and metabolism) varies widely in the same species of mice fed on different diets or obtained from different vendors. And there can be differences in physiology and behavior based on circadian rhythms, and even variations in cage design (Nature. 2016 Feb 16. doi: 10.1038/530254a).
But things are looking brighter. By the beginning of 2016, more than 600 journals had signed up for the voluntary ARRIVE (Animals in Research: Reporting of In Vivo Experiments) guidelines designed to improve the reporting of animal experiments. The guidelines include a checklist of elements to be included in any reporting of animal research, including animal strain, sex, and adverse events (Nature. 2016 Feb 1. doi: 10.1038/nature.2016.19274).
Problems have also been reported in the use of cell lines and antibodies in biomedical research. For example, a report in Nature indicated that too many biomedical researchers are lax in checking for impostor cell lines when they perform their research (Nature. 2015 Oct 12. doi: 10.1038/nature.2015.18544). And recent studies have shown that improper or misused antibodies are a significant source of false findings and irreproducibility in the modern literature (Nature. 2015 May 19. doi: 10.1038/521274a).
Reviewer, view thyself
The 2013 report in The Economist also discussed failures of the peer-reviewed scientific literature, usually considered the final gateway of quality control, to provide appropriate review and correction of research errors. It cites a damning test of lower-tier research publications by Dr. John Bohannon, a biologist at Harvard, who submitted a pseudonymous paper on the effects of a chemical derived from lichen cells to 304 journals describing themselves as using peer review. The paper was concocted wholesale, with manifold and obvious errors in study design, analysis, and interpretation of results, according to Dr. Bohannon. This fictitious paper from a fictitious researcher based at a fictitious university was accepted for publication by an alarming 147 of the journals.
The problem is not new. In 1998, Dr. Fiona Godlee, editor of the British Medical Journal, sent an article with eight deliberate mistakes in study design, analysis, and interpretation to more than 200 of the journal’s regular reviewers. None of the reviewers found all the mistakes, and on average, they spotted fewer than two. Another BMJ study showed that experience did not improve reviewer quality – quite the opposite. Over the 14-year period assessed, the ratings that editors at leading journals gave 1,500 referees showed a slow but steady decline.
Such studies prompted a profound reassessment by the journals, in part pushed by some major granting agencies, including the National Institutes of Health.
Not taking grants for granted
The National Institutes of Health is advancing efforts to expand scientific rigor and reproducibility in its grant projects.
“As part of an increasing drive to boost the reliability of research, the NIH will require applicants to explain the scientific premise behind their proposals and defend the quality of their experimental designs. They must also account for biological variables (for example, by including both male and female mice in planned studies) and describe how they will authenticate experimental materials such as cell lines and antibodies.”
Whether current efforts by scientists, societies, granting organizations, and journals can lead to authentic reform and a vast and relatively quick improvement in reproducibility of scientific results is still an open question. In discussing a 2015 report on the subject by the biomedical research community in the United Kingdom, neurophysiologist Dr. Dorothy Bishop had this to say: “I feel quite upbeat about it. ... Now that we’re aware of it, we have all sorts of ideas about how to deal with it. These are doable things. I feel that the mood is one of making science a much better thing. It might lead to slightly slower science. That could be better” (Nature. 2015 Oct 29. doi: 10.1038/nature.2015.18684).
The recent Nature editorial “Repetitive flaws” comments on the new NIH guidelines that require grant proposals to account for biological variables and to describe how experimental materials will be authenticated (2016 Jan 21. doi: 10.1038/529256a). The editorial proposes that these requirements will help improve the quality and reproducibility of research, about which many concerns have been raised in the past few years. As the editorial states, the NIH guidelines “can help to make researchers aspire to the values that produced them” and can “inspire researchers to uphold their identity and integrity.”
To investigators who strive to report only their best results after exhaustive and sincere confirmation, these guidelines will not seem threatening. Providing the experimental details of one’s work is helpful in many ways: you can reproduce the work yourself with new lab personnel or after a lapse of time, and you will have excellent experimental records and documentation when it comes time to write another grant. I have personally been frustrated when my laboratory cannot duplicate the published work of others. However, questions remain: who will pay for reproducing the work of others, and how will the sacrifice of additional animals or subjects be justified? Many laboratories are already financially strapped by current funding challenges, and time is extremely valuable. In addition, junior researchers face tenure and promotion timelines that create pressure to publish in order to establish independence and credibility, while established investigators must document continued productivity to be judged worthy of continued funding.
The quality of peer review of research publications has also been challenged recently, adding to the concern over the veracity of published research. Many journals now require statistical review prior to acceptance, which further delays publication. In addition, the generous reviewers who perform peer review often do so at the cost of their valuable, uncompensated time.
Despite these hurdles and questions, those who perform valuable and needed research to improve the lives and care of our patients must continue to strive to produce the highest level of evidence.
Dr. Jennifer S. Lawton is a professor of surgery at the division of cardiothoracic surgery, Washington University, St. Louis. She is also an associate medical editor for Thoracic Surgery News.
FDA proposes ban on powdered gloves
The Food and Drug Administration has proposed a ban on most powdered gloves used during surgery and for patient examination, and on absorbable powder used for lubricating surgeons’ gloves.
Aerosolized glove powder on natural rubber latex gloves can cause respiratory allergic reactions, and while powdered synthetic gloves don’t present the risk of allergic reactions, all powdered gloves have been associated with numerous potentially serious adverse events, including severe airway inflammation, wound inflammation, and postsurgical adhesions, according to an FDA statement.
The proposed ban would not apply to powdered radiographic protection gloves; the agency is not aware of any such gloves that are currently on the market. The ban also would not affect non-powdered gloves.
The decision to move forward with the proposed ban was based on a determination that the affected products “are dangerous and present an unreasonable and substantial risk,” according to the statement.
In making this determination, the FDA considered the available evidence, including a literature review and the 285 comments received on a February 2011 Federal Register Notice.
That notice announced the establishment of a public docket to receive comments related to powdered gloves and followed the FDA’s receipt of two citizen petitions requesting a ban on such gloves because of the adverse health effects associated with use of the gloves. The comments overwhelmingly supported a warning or ban.
The FDA determined that the risks associated with powdered gloves cannot be corrected through new or updated labeling, and thus moved forward with the proposed ban.
“This ban is about protecting patients and health care professionals from a danger they might not even be aware of,” Dr. Jeffrey Shuren, director of the FDA Center for Devices and Radiological Health, said in the statement. “We take bans very seriously and only take this action when we feel it’s necessary to protect the public health.”
In fact, should this ban be put into place, it would be only the second such ban; the first was the 1983 ban of prosthetic hair fibers, which were found to provide no public health benefit. The benefits cited for powdered gloves were almost entirely related to greater ease of putting the gloves on and taking them off, Eric Pahon of the FDA said in an interview.
A ban on the gloves was not proposed sooner in part because when concerns were first raised about the risks associated with powdered gloves, a ban would have created a shortage, and the risks of a glove shortage outweighed the benefits of banning the gloves, Mr. Pahon said.
However, a recent economic analysis conducted by the FDA because of the critical role medical gloves play in protecting patients and health care providers showed that a powdered glove ban would not cause a glove shortage or have a significant economic impact, and that a ban would not be likely to affect medical practice since numerous non-powdered gloves options are now available, the agency noted.
The proposed rule will be available online in the Federal Register on March 22, and is open for public comment for 90 days.
If finalized, the powdered gloves and absorbable powder used for lubricating surgeons’ gloves would be removed from the marketplace.
Outcomes in major surgery unchanged by continuing clopidogrel
MONTREAL – Patients who stayed on antiplatelet therapy close to – or even up until – major surgery fared just as well as those who stopped their medication earlier in a retrospective, single-center study.
The study found no difference in blood product administration, adverse perioperative events, or all-cause 30-day mortality regardless of whether patients stopped clopidogrel (Plavix) the recommended 5 days before surgery.
“We believe that continuing clopidogrel in elective and emergent surgical situations appears to be safe, and may challenge the current recommendations,” said presenter Dr. David Strosberg.
The study addressed a thorny question for surgeons, Dr. Strosberg said. “As surgeons, we face a dilemma: Do we take the risk of thrombotic complications in stopping the antiplatelet drugs, or do we take the risk of increased surgical bleeding with continuing therapy?”
The package insert for clopidogrel advises discontinuation 5 days prior to surgery. However, manufacturer labeling also states that discontinuation of clopidogrel can lead to adverse cardiac events, said Dr. Strosberg, a general surgery resident at Ohio State University in Columbus.
The aim of the study, presented at the annual meeting of the Central Surgical Association, was to ascertain whether continuing antiplatelet therapy increased the rate of adverse surgical outcomes in those undergoing major emergent or elective surgery.
Dr. Strosberg and his colleagues retrospectively reviewed the records of patients over a 4-year period at a single institution and included those undergoing major general, thoracic, or vascular surgery who were taking clopidogrel at the time of presentation.
Data collected included patient characteristics, including demographic data and comorbidities, as well as transfusion requirements and perioperative events.
A total of 200 patients who had 205 qualifying procedures and were taking clopidogrel were included in the study. Of these, 116 patients (Group A) had their clopidogrel held for at least 5 days preoperatively. The remaining 89 patients (Group B) had their clopidogrel held for less than 5 days, or not at all.
Patient demographics were similar between the two groups. Patients in Group A were more likely to have emergency surgery, to have peripheral stents placed, to have COPD or peripheral vascular disease, to have a malignancy, and to have received aspirin within 5 days of surgery (P less than .01 for all).
Blood product administration rates and volumes did not differ significantly between the two groups, and there was no difference between the groups in the incidence of myocardial infarctions, cerebrovascular events, or acute visceral or lower extremity ischemia.
Three patients in each group died within 30 days of the procedure, a nonsignificant difference. However, in the group that had clopidogrel held, three patients had perioperative myocardial infarctions, and two of these patients died. In discussing the study, Dr. Michael Dalsing said, “I think a lot of us would accept bleeding more over myocardial infarction.”
A subgroup analysis of the group who had clopidogrel held for fewer than 5 days compared outcomes for emergent vs. non-emergent surgery. The emergent surgery subgroup had a significantly higher rate of preoperative platelet transfusions, although numbers overall were small (2/17, 11.8%, vs. 0/72; P = .03).
Dr. Strosberg noted study limitations that included the retrospective, single-center nature of the study, and the fact that one variable, estimated blood loss, is notoriously subjective and inaccurate.
Dr. Dalsing, chief of vascular surgery at Indiana University, Indianapolis, said that he “was surprised that not even one patient went back for postoperative bleeding in this high-risk group of patients,” and raised the question of potential selection bias. Dr. Strosberg replied that comorbidities were ascertained by the physician at the time of surgery planning; since no differences were seen between study groups, investigators didn’t go back and parse out details about comorbid conditions.
In discussion following the presentation, surgeons spoke to the real-world challenges of performing surgery on a patient with antiplatelet therapy on board.
“Overall, I think your data support kind of a bias I have. Since I’m a vascular surgeon, we almost always operate on clopidogrel, and I don’t know if our bleeding risk is worse or better. But it’s something we almost have to do to keep our grafts going,” Dr. Dalsing said.
Dr. Peter Henke, professor of vascular surgery at the University of Michigan, Ann Arbor, said, “I’d be a little bit cautious with this. If you’ve ever done a big aortic procedure on someone on Plavix, I’ve seen them lose up to a couple of liters of blood just with oozing.”
“Those of us who do open aortic surgery know that very few things bleed like the back wall of an aortic anastomosis of a patient on Plavix,” echoed Dr. Peter Rossi, associate professor of vascular surgery at the Medical College of Wisconsin, Milwaukee.
The study authors reported no relevant disclosures.
On Twitter @karioakes
AT THE ANNUAL MEETING OF THE CENTRAL SURGICAL ASSOCIATION
Key clinical point: Outcomes were similar whether patients did or didn’t stop clopidogrel before major surgery.
Major finding: No significant differences in blood product use, adverse events, or death were seen with continuing clopidogrel.
Data source: Retrospective, single-center review of 200 patients undergoing major elective or emergent surgery and taking clopidogrel.
Disclosures: The study authors reported no relevant disclosures.
Age over 80 years should not preclude stenting of left main artery
WASHINGTON – Octogenarians rejected for coronary surgical revascularization can expect outcomes from percutaneous coronary intervention similar to those of younger patients who have also been considered too high risk for surgery, according to an experience reported by researchers from the University of Southern California, Los Angeles, at the Cardiovascular Research Technologies 2016 conference, sponsored by the Cardiovascular Research Institute at Washington Hospital Center. In a series of patients requiring revascularization for unprotected left main coronary artery (ULMCA) stenosis, there were no significant differences between age groups in either short-term or long-term outcomes, reported Dr. Meena R. Narayanan.
The analysis was based on a series of 71 patients with ULMCA stenosis who were considered to be too high risk for surgical revascularization and underwent percutaneous coronary intervention and stent placement as an alternative. Of these, 18 were more than 80 years of age and 53 were younger.
When the two groups were compared, most of the baseline characteristics were similar. However, there were exceptions. Diabetes mellitus was substantially more frequent in the younger patients (55% vs. 22%) but advanced chronic kidney disease was far more common in the octogenarians (61% vs. 30%).
The octogenarians also had significantly higher Society of Thoracic Surgeons (STS) scores (14.1 vs. 6.5; P = .009) and higher European System for Cardiac Operative Risk Evaluation (EUROSCORE) numbers (17.0 vs. 8.2; P = .01), both signifying a worse prognosis. However, both scoring systems use older age as an incremental factor for increased risk.
Regarding mortality, both the 30-day (17% vs. 4%) and the 1-year (28% vs. 21%) rates were higher for the octogenarians than for the younger patients, but neither difference reached statistical significance, according to Dr. Narayanan. There also did not appear to be any differences in complications during acute recovery after percutaneous coronary intervention. For example, the need for temporary dialysis was exactly the same (20% in both groups), and the average length of stay, although longer among the octogenarians (12.0 vs. 8.4 days), also did not differ significantly.
“Elderly patients are well known to have higher mortality rates associated with coronary surgical revascularization,” reported Dr. Narayanan, but these data suggest that stenting is a reasonable alternative. Notably, this study is not the first to suggest that age above 80 years may not be an appropriate exclusion factor for coronary stenting. In a study published in 2012, outcomes were evaluated in 70 consecutive patients 80 years of age or older undergoing left main coronary stenting (Cardiovasc Revasc Med. 2012;13:119-24). In-hospital mortality was 11%, but overall mortality after a mean follow-up of 30.5 months was 28%, which was considered reasonable in a high-risk population.
The authors of the 2012 study, like Dr. Narayanan, concluded that stenting appears to be a reasonable approach in octogenarians who are not candidates for surgical revascularization.
WASHINGTON – Octogenarians rejected for coronary surgical revascularization can expect outcomes from percutaneous coronary intervention similar to those provided to younger patients who have also been considered to be too high risk for surgery, according to an experience reported by researchers from the University of Southern California, Los Angeles, at the Cardiovascular Research Technologies 2016 conference, sponsored by the Cardiovascular Research Institute at Washington Hospital Center. In a series of patients requiring revascularization for unprotected left main coronary artery (ULMCA) stenosis, there were no significant differences in either short-term or long-term outcomes, reported Dr. Meena R. Narayanan of the University of Southern California, Los Angeles.
The analysis was based on a series of 71 patients with ULMCA stenosis who were considered to be too high risk for surgical revascularization and underwent percutaneous coronary intervention and stent placement as an alternative. Of these, 18 were more than 80 years of age and 53 were younger.
When the two groups were compared, most of the baseline characteristics were similar. However, there were exceptions. Diabetes mellitus was substantially more frequent in the younger patients (55% vs. 22%) but advanced chronic kidney disease was far more common in the octogenarians (61% vs. 30%).
The octogenarians also had significantly higher Society of Thoracic Surgeons (STS) scores (14.1 vs. 6.5; P = .009) and higher European System for Cardiac Operative Risk Evaluation (EUROSCORE) numbers (17.0 vs. 8.2; P = .01), both signifying a worse prognosis. However, both scoring systems use older age as an incremental factor for increased risk.
Regarding mortality, both the 30-day (17% vs. 4%) and the 1-year (28% vs. 21%) rates were higher for the octogenarians relative to those younger, but neither difference reached statistical significance, according to Dr. Narayanan. There also did not appear to be any differences in complications during acute recovery after percutaneous coronary intervention. For example, the need for temporary dialysis was exactly the same (20% in both groups) and the average length of stay, although longer among those older (12.0 vs. 8.4 days), also did not differ significantly.
“Elderly patients are well known to have higher mortality rates associated with coronary surgical revascularization,” reported Dr. Narayanan, but these data suggest that stenting is a reasonable alternative. It is notable that this study is not the first to suggest that age above 80 years may not be an appropriate exclusion factor for coronary stenting. In a study published 3 years ago, outcomes were evaluated in 70 consecutive patients 80 years of age or older undergoing left main coronary stenting (Cardiovasc Revasc Med. 2012;13:119-24). In-hospital mortality was 11% but overall mortality after a mean follow-up time of 30.5 months was 28%, which was considered reasonable in a high-risk population.
The authors of the 2012 study, like Dr. Narayanan, concluded that stenting appears to be a reasonable approach in octogenarians who are not candidates for surgical revascularization.
WASHINGTON – Octogenarians rejected for coronary surgical revascularization can expect outcomes from percutaneous coronary intervention similar to those provided to younger patients who have also been considered to be too high risk for surgery, according to an experience reported by researchers from the University of Southern California, Los Angeles, at the Cardiovascular Research Technologies 2016 conference, sponsored by the Cardiovascular Research Institute at Washington Hospital Center. In a series of patients requiring revascularization for unprotected left main coronary artery (ULMCA) stenosis, there were no significant differences in either short-term or long-term outcomes, reported Dr. Meena R. Narayanan of the University of Southern California, Los Angeles.
The analysis was based on a series of 71 patients with ULMCA stenosis who were considered to be too high risk for surgical revascularization and underwent percutaneous coronary intervention and stent placement as an alternative. Of these, 18 were more than 80 years of age and 53 were younger.
When the two groups were compared, most of the baseline characteristics were similar. However, there were exceptions. Diabetes mellitus was substantially more frequent in the younger patients (55% vs. 22%) but advanced chronic kidney disease was far more common in the octogenarians (61% vs. 30%).
The octogenarians also had significantly higher Society of Thoracic Surgeons (STS) scores (14.1 vs. 6.5; P = .009) and higher European System for Cardiac Operative Risk Evaluation (EUROSCORE) numbers (17.0 vs. 8.2; P = .01), both signifying a worse prognosis. However, both scoring systems use older age as an incremental factor for increased risk.
Regarding mortality, both the 30-day (17% vs. 4%) and the 1-year (28% vs. 21%) rates were higher for the octogenarians than for the younger patients, but neither difference reached statistical significance, according to Dr. Narayanan. Nor did there appear to be any differences in complications during acute recovery after percutaneous coronary intervention. For example, the need for temporary dialysis was exactly the same (20% in both groups), and the average length of stay, although longer among the octogenarians (12.0 vs. 8.4 days), did not differ significantly.
“Elderly patients are well known to have higher mortality rates associated with coronary surgical revascularization,” reported Dr. Narayanan, but these data suggest that stenting is a reasonable alternative. It is notable that this study is not the first to suggest that age above 80 years may not be an appropriate exclusion factor for coronary stenting. In a study published 3 years ago, outcomes were evaluated in 70 consecutive patients 80 years of age or older undergoing left main coronary stenting (Cardiovasc Revasc Med. 2012;13:119-24). In-hospital mortality was 11% but overall mortality after a mean follow-up time of 30.5 months was 28%, which was considered reasonable in a high-risk population.
The authors of the 2012 study, like Dr. Narayanan, concluded that stenting appears to be a reasonable approach in octogenarians who are not candidates for surgical revascularization.
AT THE CARDIOVASCULAR RESEARCH TECHNOLOGIES 2016
Key clinical point: Among patients who are not candidates for surgical revascularization of stenosis in the left main coronary artery, those over the age of 80 years appear to achieve similar outcomes relative to younger patients.
Major finding: When patients over 80 years of age were compared with younger patients, there were no significant differences in any outcome, including 30-day and 1-year mortality.
Data source: Observational study.
Disclosures: Dr. Narayanan reports no financial relationships relevant to this study.
Safety of bioresorbable stents does not match that of metal stents
Bioresorbable vascular scaffold stents are improving rapidly but they are still associated with a higher risk of complications compared with drug-eluting metal stents, according to a meta-analysis of published studies presented at Cardiovascular Research Technologies 2016.
“Bioresorbable stents are clearly an attractive strategy, but our data suggest that physicians and patients should remain aware of the risks,” reported Dr. Alok Saurav of Creighton University Medical Center, Omaha, Neb.
The first bioresorbable vascular scaffold (BVS) device, Synergy, was approved this past October, but this stent, despite its bioresorbable struts, still includes components that are not fully bioresorbable. However, several fully bioresorbable devices have reached late stages of testing and may receive regulatory approval this year.
The meta-analysis included eight studies – five randomized trials, two propensity-matched studies, and one observational study. The primary goal was to compare BVS with drug-eluting metal (DEM) stents for definite stent thrombosis. Secondary outcomes included subacute stent thrombosis within 30 days and within 1 year, as well as cardiac death, all-cause death, MI, and ischemia-driven target vessel revascularization (TVR).
Although the 2,760 patients receiving BVS stents and the 2,212 receiving DEM stents were similar in mean age and gender distribution, and both groups received comparable antiplatelet regimens after stent placement, the relative risk of definite stent thrombosis was 80% greater in the BVS group. The difference fell short of statistical significance (P = .06), but Dr. Saurav called it a “strong trend.”
Several adverse events analyzed as secondary outcomes were less frequent with BVS, such as cardiac death (relative risk, 0.83) and all-cause death (RR, 0.74), but the statistics did not suggest a trend, so Dr. Saurav characterized these outcomes as similar. MI was an exception: it was significantly more frequent in those who received a BVS stent (RR, 1.35; P = .049).
Most of the studies included in this analysis were conducted with the everolimus-eluting Absorb BVS device, which many are predicting will be the first fully bioresorbable stent to receive regulatory approval.
It is notable that another meta-analysis including some of the same studies and published just weeks prior to the CRT meeting drew the same conclusion about the increased risk of stent thrombosis with BVS relative to DEM stents (Lancet 2016;387:537-44). This meta-analysis was restricted to six trials with 3,738 randomized patients. Unlike the meta-analysis presented at CRT, this study compared the two types of stents for both definite and probable stent thrombosis. For BVS relative to DEM stents, the relative risk for this outcome was 1.99 (P = .05).
“We think our restriction to definite stent thrombosis provides a stricter endpoint, but it’s notable that the results were relatively consistent,” Dr. Saurav reported.
Acknowledging that the increased risk of stent thrombosis appears to be modest for BVS relative to DEM stents, Dr. Saurav emphasized that these data should not discourage further development of bioresorbable stents, which are conceptually attractive.
“We cannot take these bioresorbable devices off the table,” he said. “But we do need more data to evaluate their risks relative to the conventional devices that are now available.”
The meeting was sponsored by the Cardiovascular Research Institute at Washington Hospital Center. Dr. Saurav reported no conflicts of interest.
AT CARDIOVASCULAR RESEARCH TECHNOLOGIES 2016
Key clinical point: Trial data suggest the risk of thrombosis and other adverse events remains higher with bioresorbable stents than with conventional drug-eluting metal stents.
Major finding: In a meta-analysis, the 80% increased risk of definite stent thrombosis for bioresorbable relative to metal stents fell just short of significance (P = .06) but the 35% increased risk of subsequent MI was significant (P = .049).
Data source: Meta-analysis of eight studies.
Disclosures: Dr. Saurav reported no conflicts of interest.
Pro basketball players’ hearts: LV keeps growing, aortic root doesn’t
For the first time, cardiologists have characterized the adaptive cardiac remodeling in a large cohort of National Basketball Association players, which establishes a normative database and allows physicians to distinguish it from occult pathologic changes that may precipitate sudden cardiac death, according to an imaging study.
“We hope that the present data will help to focus decision making and improve clinical acumen for the purpose of primary prevention of cardiac emergencies in U.S. basketball players and in the athletic community at large,” said Dr. David J. Engel and his associates of Columbia University, New York.
Until now, most of the literature concerning the structural features of the athletic heart has been based on European studies, where comprehensive cardiac screening of all elite athletes is mandatory. The typical sports activities and the demographics of athletes in the U.S. are different, and their cardiologic profiles have not been well studied because detailed cardiac examinations are not compulsory. But the NBA recently mandated that all athletes undergo annual preseason medical evaluations including stress echocardiograms, and allowed the division of cardiology at Columbia to assess the results each year.
“A detailed understanding of normal and expected cardiac remodeling in U.S. basketball players has significant clinical importance given that the incidence of sports-related sudden cardiac death in the U.S. is highest among basketball players and that the most common cause ... in this population is hypertrophic cardiomyopathy,” the investigators noted.
Their analysis of all 526 echocardiograms performed on NBA players during a 1-year period “will provide an invaluable frame of reference to enhance player safety for the large group of U.S. basketball players in training at all skill levels, and in the athletic community at large,” they said.
The study participants were aged 18-39 years (mean age, 25.7 years). Roughly 77% were African American, 20% were white, 2% were Hispanic, and 1% were Asian or other ethnicities. The mean height was 200.2 cm (6’7”).
Left ventricular cavity size was larger than that in the general population, but LV size was proportional to the athletes’ large body size. “Scaling LV size to body size is vitally important in the cardiac evaluation of basketball players, whose heights extend to 218 cm and body surface areas to 2.8 m²,” Dr. Engel and his associates said (JAMA Cardiol. 2016 Feb 24. doi: 10.1001/jamacardio.2015.0252).
Left ventricular hypertrophy (LVH) was identified in only 27% of the athletes. African Americans had increased indices of LVH, compared with whites, and had a higher incidence of nondilated concentric hypertrophy, while whites showed predominantly eccentric dilated hypertrophy. These findings should help clinicians recognize genuine hypertrophic cardiomyopathy, which is a contraindication to participating in all but the lowest-intensity competitive sports.
Most of the participants had a normal left ventricular ejection fraction, and all showed normal augmentation of LV systolic function with exercise.
Aortic root diameter was larger than that in the general population but similar to that in other elite athletes. Surprisingly, aortic root diameter increased with increasing body size only to a certain point, reaching a plateau at 31-35 mm. Fewer than 5% of the participants had an aortic root diameter of 40 mm or more, and the maximal diameter was 42 mm. “These data have important implications in the evaluation of exceptionally large athletes and question the applicability in individuals with significantly increased biometrics of the traditional formula to estimate aortic root diameter that assumes a linear association between [it] and body surface area,” they noted.
“We hope that the results of this study will assist recognition of cardiac pathologic change and provide a framework to help avoid unnecessary exclusions of athletes from competition. We believe that these data have additional applicability to other sports that preselect for athletes with height, such as volleyball, rowing, and track and field,” Dr. Engel and his associates added.
This study was supported by the National Basketball Association as part of a medical services agreement with Columbia University. Dr. Engel and his associates reported having no relevant financial disclosures.
The most interesting finding of this study was that despite the immense body size of the athletes, aortic root diameter exceeded 40 mm in less than 5%, and when dilation did occur it was of a very small magnitude, with a maximal diameter of 42 mm.
This important finding confirms that only mild aortic dilation should be considered physiologic among athletes, and that even athletes at the extreme end of the height spectrum should not be expected to show proportionally extreme aortic dilation.
Unlike ventricular size, which increases proportionally with body size, aortic dilation has an upper limit. Athletes with aortic dimensions that exceed this limit should be considered at risk for aortopathy and either prohibited from competitive sports or closely monitored if they do participate.
Dr. Aaron L. Baggish of the Cardiovascular Performance Program at Massachusetts General Hospital, Boston, made these remarks in an accompanying editorial (JAMA Cardiol. 2016 Feb 24. doi: 10.1001/jamacardio.2015.0289). He reported having no relevant financial conflicts of interest.
FROM JAMA CARDIOLOGY
Key clinical point: Cardiologists characterized normal, adaptive cardiac remodeling in NBA players, allowing physicians to distinguish it from occult pathologic changes that may precipitate sudden cardiac death.
Major finding: Aortic root diameter increased with increasing body size only to a certain point, reaching a plateau at 31-35 mm.
Data source: An observational cohort study in which echocardiograms of 526 professional athletes were analyzed.
Disclosures: This study was supported by the National Basketball Association as part of a medical services agreement with Columbia University. Dr. Engel and his associates reported having no relevant financial disclosures.