Discontinuing Inhaled Corticosteroids in COPD Reduces Risk of Pneumonia
Clinical question: Is discontinuation of inhaled corticosteroids (ICSs) in patients with COPD associated with a decreased risk of pneumonia?
Background: ICSs are used in up to 85% of patients treated for COPD but may be associated with adverse effects, including pneumonia. Trials weaning patients off ICSs and replacing them with long-acting bronchodilators have found few adverse outcomes; however, the effect of discontinuation on adverse events, including pneumonia, has been unclear.
Study design: Case-control study.
Setting: Quebec health systems.
Synopsis: Using the Quebec health insurance databases, a study cohort of 103,386 patients with COPD on ICSs was created. Patients were followed for a mean of 4.9 years; 14,020 patients who were hospitalized for pneumonia or died from pneumonia outside the hospital were matched to control subjects. Discontinuation of ICSs was associated with a 37% decrease in serious pneumonia (relative risk [RR] 0.63; 95% CI, 0.60–0.66). The risk reduction occurred as early as one month after discontinuation of ICSs. Risk reduction was greater with fluticasone (RR 0.58; 95% CI, 0.54–0.61) than with budesonide (RR 0.87; 95% CI, 0.78–0.97).
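As a quick check of the reported arithmetic, the 37% figure is simply the complement of the relative risk:

$$\text{RRR} = 1 - \text{RR} = 1 - 0.63 = 0.37 = 37\%$$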
The large population and long follow-up may explain why a reduction in pneumonia risk was seen in this study but not in other recent randomized trials of ICS discontinuation. A limitation of this study is its observational design; its results nonetheless suggest that use of ICSs in COPD patients should be highly selective, as indiscriminate use can expose patients to an elevated risk of hospitalization or death from pneumonia.
Bottom line: Discontinuation of ICSs in patients with COPD is associated with a decreased risk of serious pneumonia. This reduction appears greatest with fluticasone.
Citation: Suissa S, Coulombe J, Ernst P. Discontinuation of inhaled corticosteroids in COPD and the risk reduction of pneumonia. Chest. 2015;148(5):1177-1183.
Short Take
Increase in Rates of Prescription Drug Use and Polypharmacy Seen
The percentage of Americans who reported taking prescription medications increased substantially from 1999 to 2012 (51% to 59%), as did the percentage who reported taking at least five prescription medications.
Citation: Kantor ED, Rehm CD, Haas JS, Chan AT, Giovannucci EL. Trends in prescription drug use among adults in the United States from 1999-2012. JAMA. 2015;314(17):1818-1830.
MEDS Score for Sepsis Might Best Predict ED Mortality
Clinical question: Which illness severity score best predicts outcomes in emergency department (ED) patients presenting with infection?
Background: Several scoring models have been developed to predict illness severity and mortality in patients with infection. Some scores were developed specifically for patients with sepsis and others for patients in a general critical care setting. These different scoring models have not been specifically compared and validated in the ED setting in patients with infection of various severities.
Study design: Prospective, observational study.
Setting: Adult ED in a metropolitan, tertiary-care, university-affiliated hospital.
Synopsis: Investigators prospectively identified 8,871 adult patients with infection admitted from a single-center ED and collected the data needed to calculate five prediction models:
- Mortality in Emergency Department Sepsis (MEDS) score;
- Acute Physiology and Chronic Health Evaluation II (APACHE II);
- Simplified Acute Physiology Score II (SAPS II);
- Sequential Organ Failure Assessment (SOFA); and
- Severe Sepsis Score (SSS).
Severity score performance was assessed for the overall cohort and for subgroups, including infection without systemic inflammatory response syndrome, sepsis, severe sepsis, and septic shock. The MEDS score best predicted mortality in the cohort, with an area under the receiver operating characteristic curve of 0.92. However, older scoring models such as APACHE II and SAPS II still discriminated well, especially in patients admitted to the ICU. All scores tended to overestimate mortality.
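For readers less familiar with the C-statistic, the area under the receiver operating characteristic curve has a standard probabilistic reading (this interpretation is general, not specific to this paper): it is the probability that a randomly chosen patient who died received a higher score than a randomly chosen survivor,

$$\text{AUC} = P\left(S_{\text{nonsurvivor}} > S_{\text{survivor}}\right) = 0.92,$$

so the MEDS score correctly ranks such a pair about 92% of the time (ignoring ties).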
Bottom line: The MEDS score may best predict illness severity in septic patients presenting to the ED, but other scoring models may be better suited for specific patient populations.
Citation: Williams JM, Greenslade JH, Chu K, Brown AF, Lipman J. Severity scores in emergency department patients with presumed infection: a prospective validation study. Crit Care Med. 2016;44(3):539-547.
Continuous Chest Compressions Do Not Improve Outcome Compared to Chest Compressions Interrupted for Ventilation
Clinical question: In cardiopulmonary resuscitation, do continuous chest compressions improve survival or neurologic outcome compared to chest compressions interrupted for ventilation?
Background: Animal models have demonstrated that interruptions in chest compressions are associated with decreased survival and worse neurologic outcome in cardiac arrests. Observational studies in humans have suggested that for out-of-hospital cardiac arrests, continuous compressions result in improved survival.
Study design: Unblinded, randomized, cluster design with crossover.
Setting: One hundred fourteen emergency medical service (EMS) agencies across eight clinical sites in North America.
Synopsis: Patients with out-of-hospital cardiac arrest received either continuous chest compressions with asynchronous positive-pressure ventilations or interrupted compressions at a rate of 30 compressions to two ventilations. EMS agencies were divided into clusters and randomly assigned to deliver either resuscitation strategy. Twice per year, each cluster switched treatment strategies.
During the active enrollment phase, 12,653 patients were enrolled in the intervention arm and 11,058 in the control arm. The primary outcome of survival to hospital discharge was comparable between the two groups, with a 9.0% survival rate in the intervention group versus 9.7% in the control group (P=0.07). The secondary outcome of survival with favorable neurologic status was also similar: 7.0% in the intervention group and 7.7% in the control group.
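In absolute terms, the gap reported above is small:

$$9.0\% - 9.7\% = -0.7 \text{ percentage points}$$

favoring the control (interrupted-compression) strategy, a difference that did not reach statistical significance.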
There was only a small difference in the proportion of minutes devoted to compressions between the two groups, so the similarity in outcomes may be reflective of high-quality chest compressions. Additional limitations include a lack of standardization of post-resuscitation care and a lack of measurement of oxygen or ventilation delivered.
Bottom line: For out-of-hospital cardiac arrests, continuous chest compressions with positive-pressure ventilation did not increase survival or improve neurologic outcome compared to interrupted chest compressions.
Citation: Nichol G, Leroux B, Wang H, et al. Trial of continuous or interrupted chest compressions during CPR. N Engl J Med. 2015;373(23):2203-2214.
ATRIA Better at Predicting Stroke Risk in Patients with Atrial Fibrillation Than CHADS2, CHA2DS2-VASc
Clinical question: Does the Anticoagulation and Risk Factors in Atrial Fibrillation (ATRIA) risk score more accurately identify patients with atrial fibrillation (Afib) who are at low risk for ischemic stroke than the CHADS2 or CHA2DS2-VASc score?
Background: More accurate and reliable stroke risk prediction tools are needed to optimize anticoagulation decision making in patients with Afib. Recently, a new clinically based risk score, the ATRIA, has been developed and validated. This risk score assigns points based on four age categories (as well as an interaction of age and prior stroke); female gender; renal function; and history of diabetes, congestive heart failure, and hypertension. This study compared the predictive ability of the ATRIA risk score with the CHADS2 and CHA2DS2-VASc risk scores and their implications for anticoagulant treatment in Afib patients.
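For context, the two comparator scores are simple additive point systems. Below is a minimal sketch of their published point assignments; the ATRIA score is not implemented because its point values are not listed in this summary, and the example patient is hypothetical.

```python
def chads2(chf: bool, htn: bool, age: int, diabetes: bool, stroke_tia: bool) -> int:
    """CHADS2: 1 point each for CHF, hypertension, age >= 75, and diabetes;
    2 points for prior stroke/TIA."""
    return chf + htn + (age >= 75) + diabetes + 2 * stroke_tia

def cha2ds2_vasc(chf: bool, htn: bool, age: int, diabetes: bool,
                 stroke_tia: bool, vascular: bool, female: bool) -> int:
    """CHA2DS2-VASc adds vascular disease, female sex, and a finer age scale."""
    age_pts = 2 if age >= 75 else (1 if age >= 65 else 0)
    return chf + htn + age_pts + diabetes + 2 * stroke_tia + vascular + female

# Hypothetical example: a 78-year-old woman with hypertension and diabetes
print(chads2(chf=False, htn=True, age=78, diabetes=True, stroke_tia=False))  # 3
print(cha2ds2_vasc(chf=False, htn=True, age=78, diabetes=True,
                   stroke_tia=False, vascular=False, female=True))           # 5
```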
Study design: Retrospective cohort study.
Setting: Afib patients not using warfarin from the United Kingdom’s Clinical Practice Research Datalink (CPRD) database, January 1998 to January 2012.
Synopsis: A total of 60,594 patients with Afib were followed until occurrence of ischemic stroke, prescription of warfarin, death, or the study’s end. The annualized stroke rate was 2.99%. Patients with moderate- and high-risk CHA2DS2-VASc scores had lower event rates than those with corresponding ATRIA and CHADS2 scores. C-statistics for the full point scores were 0.70 (95% CI, 0.69–0.71) for ATRIA and 0.68 (95% CI, 0.67–0.69) for both the CHADS2 and CHA2DS2-VASc scores. The net reclassification indices of ATRIA compared with the CHADS2 and CHA2DS2-VASc risk scores were 0.137 and 0.233, respectively, indicating that the ATRIA risk score better categorizes patients who go on to have an event.
The ATRIA risk score more accurately identified as low risk many patients whom the CHA2DS2-VASc score assigned to higher-risk categories. The results persisted even after restricting the analysis to more recent follow-up, excluding unspecified strokes, and excluding renal dysfunction as a predictor. Most of the improvement with ATRIA was the result of “down classification,” suggesting that use of the CHA2DS2-VASc risk score could lead to overtreatment of patients at very low risk of stroke.
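The net reclassification index cited above is the standard reclassification statistic; in its usual form (not restated in the source),

$$\mathrm{NRI} = \left[P(\text{up} \mid \text{event}) - P(\text{down} \mid \text{event})\right] + \left[P(\text{down} \mid \text{nonevent}) - P(\text{up} \mid \text{nonevent})\right],$$

where “up” and “down” denote reclassification into a higher or lower risk category by ATRIA relative to the comparator; positive values indicate net reclassification in the correct direction.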
Bottom line: The ATRIA risk score better identifies Afib patients who are at low risk for stroke compared to CHADS2 and CHA2DS2-VASc scores.
Citation: van den Ham HA, Klungel OH, Singer DE, Leufkens HG, van Staa TP. Comparative performance of ATRIA, CHADS2, and CHA2DS2-VASc risk scores predicting stroke in patients with atrial fibrillation: results from a national primary care database. J Am Coll Cardiol. 2015;66(17):1851-1859.