Ischemic Hepatitis Associated with High Inpatient Mortality
Clinical question: What is the incidence and outcome of patients with ischemic hepatitis?
Background: Ischemic hepatitis, or shock liver, is often diagnosed in patients with a massive increase in aminotransferase levels, most often exceeding 1,000 IU/L, in the setting of hepatic hypoperfusion. Data on the overall incidence and mortality of these patients are limited.
Study Design: Systematic review and meta-analysis.
Setting: Variable.
Synopsis: Searching PubMed, Embase, and Web of Science, the investigators included 24 papers on the incidence and outcomes of ischemic hepatitis published between 1965 and 2015, with a combined total of 1,782 cases. The incidence of ischemic hepatitis varied by patient location, with an incidence of 2 per 1,000 among all inpatient admissions and 2.5 per 100 among ICU admissions. The majority of patients suffered from cardiac comorbidities and decompensation during their admission. Inpatient mortality with ischemic hepatitis was 49%.
Interestingly, only 52.9% of patients had an episode of documented hypotension.
Hospitalists caring for patients with a massive rise in aminotransferases should consider ischemic hepatitis higher in their differential diagnosis, even in the absence of documented hypotension.
There was significant variability in study design, sample size, and inclusion criteria among the studies, which reduces generalizability of this systematic review.
Bottom line: Ischemic hepatitis is associated with very high mortality and should be suspected in patients with high levels of alanine aminotransferase/aspartate aminotransferase even in the absence of documented hypotension.
Citation: Tapper EB, Sengupta N, Bonder A. The incidence and outcomes of ischemic hepatitis: a systematic review with meta-analysis. Am J Med. 2015;128(12):1314-1321.
Short Take
Music Can Help Ease Pain and Anxiety after Surgery
A systematic review and meta-analysis showed that music reduces pain and anxiety and decreases the need for pain medication in postoperative patients, regardless of the type of music or when in the perioperative period the music was initiated.
Citation: Hole J, Hirsch M, Ball E, Meads C. Music as an aid for postoperative recovery in adults: a systematic review and meta-analysis. Lancet. 2015;386(10004):1659-1671.
AIMS65 Score Helps Predict Inpatient Mortality in Acute Upper Gastrointestinal Bleed
Clinical question: Does AIMS65 risk stratification score predict inpatient mortality in patients with acute upper gastrointestinal bleed (UGIB)?
Background: Acute UGIB is associated with significant morbidity and mortality, which makes it crucial to identify high-risk patients early. Several prognostic algorithms, such as the Glasgow-Blatchford score (GBS) and the pre-endoscopy (pre-RS) and post-endoscopy (post-RS) Rockall scores, are available to triage such patients. The goal of this study was to validate the AIMS65 score as a predictor of inpatient mortality in patients with acute UGIB compared with these other prognostic scores.
Study Design: Retrospective cohort study.
Setting: Tertiary-care center in Australia, January 2010 to June 2013.
Synopsis: Using ICD-10 diagnosis codes, investigators identified 424 patients with UGIB requiring endoscopy. All patients were risk-stratified using AIMS65, GBS, pre-RS, and post-RS. The AIMS65 score was found to be superior in predicting inpatient mortality compared to GBS and pre-RS scores and statistically superior to all other scores in predicting need for ICU admission.
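For context on what goes into the score, here is a minimal sketch of an AIMS65 calculator in Python, assuming the commonly published component definitions (albumin < 3.0 g/dL, INR > 1.5, altered mental status, systolic blood pressure ≤ 90 mmHg, age > 65 years); the function and example values are illustrative and are not taken from the study:

```python
def aims65_score(albumin_g_dl: float, inr: float, altered_mental_status: bool,
                 systolic_bp_mmhg: float, age_years: int) -> int:
    """Illustrative AIMS65 calculator: one point per criterion met.

    Criteria as commonly published: Albumin < 3.0 g/dL, INR > 1.5,
    altered Mental status, Systolic BP <= 90 mmHg, age > 65 years.
    """
    criteria = [
        albumin_g_dl < 3.0,
        inr > 1.5,
        altered_mental_status,
        systolic_bp_mmhg <= 90,
        age_years > 65,
    ]
    return sum(criteria)


# Example: an 80-year-old with albumin 2.8 g/dL, INR 1.2, normal mentation,
# and systolic BP 85 mmHg scores 3 points.
print(aims65_score(albumin_g_dl=2.8, inr=1.2, altered_mental_status=False,
                   systolic_bp_mmhg=85, age_years=80))  # -> 3
```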
Limitations include the single-center, retrospective design and the use of ICD-10 codes to identify patients. Prospective studies are needed to further validate the AIMS65 score in acute UGIB.
Bottom line: AIMS65 is a simple and useful tool in predicting inpatient mortality in patients with acute UGIB. However, its applicability in making clinical decisions remains unclear.
Citation: Robertson M, Majumdar A, Boyapati R, et al. Risk stratification in acute upper GI bleeding: comparison of the AIMS65 score with the Glasgow-Blatchford and Rockall scoring systems [published online ahead of print October 16, 2015]. Gastrointest Endosc. doi:10.1016/j.gie.2015.10.021.
No Mortality Benefit to Cardiac Catheterization in Patients with Stable Ischemic Heart Disease
Clinical question: Can cardiac catheterization prolong survival in patients with stable ischemic heart disease?
Background: Previous results from the COURAGE trial found no benefit of percutaneous coronary intervention (PCI) compared with medical therapy on a composite endpoint of death or nonfatal myocardial infarction or in total mortality at 4.6 years of follow-up. The authors now report 15-year follow-up of the same patients.
Study design: Randomized, controlled trial.
Setting: The majority of the patients were from Veterans Affairs (VA) medical centers, although non-VA hospitals in the U.S. also were included.
Synopsis: Originally, 2,287 patients with stable ischemic heart disease and either an abnormal stress test or evidence of ischemia on ECG, as well as at least 70% stenosis on angiography, were randomized to medical therapy or medical therapy plus PCI. Now, investigators have obtained extended follow-up information for 1,211 of the original patients (53%). They concluded that after 15 years of follow-up, there was no survival difference for the patients who initially received PCI in addition to medical management.
One limitation of the study was that it did not reflect important advances in both medical and interventional management of ischemic heart disease that have taken place since the study was conducted, which may affect patient mortality. It is also noteworthy that the investigators were unable to determine how many patients in the medical management group subsequently underwent revascularization after the study concluded and therefore may have crossed over between groups. Nevertheless, for now it appears that the major utility of PCI in stable ischemic heart disease is in symptomatic management.
Bottom Line: After 15 years of follow-up, there was still no mortality benefit to PCI as compared to optimal medical therapy for stable ischemic heart disease.
Citation: Sedlis SP, Hartigan PM, Teo KK, et al. Effect of PCI on long-term survival in patients with stable ischemic heart disease. N Engl J Med. 2015;373(20):1937-1946.
Short Take
CAUTIs Are Rarely Clinically Relevant and Associated with Low Complication Rate
A single-center retrospective study in the ICU setting shows that the definition of catheter-associated urinary tract infection (CAUTI) is nonspecific and that CAUTIs are mostly diagnosed when urine cultures are sent during workup of fever. Most of the time, there are alternative explanations for the fever.
Citation: Tedja R, Wentink J, O’Horo J, Thompson R, Sampathkumar P, et al. Catheter-associated urinary tract infections in intensive care unit patients. Infect Control Hosp Epidemiol. 2015;36(11):1330-1334.
Increase in Broad-Spectrum Antibiotics Disproportionate to Rate of Resistant Organisms
Clinical question: Have healthcare-associated pneumonia (HCAP) guidelines improved treatment accuracy?
Background: Guidelines released in 2005 call for the use of broad-spectrum antibiotics for patients presenting with pneumonia who have had recent healthcare exposure. However, there is scant evidence to support the risk factors they identify, and the guidelines are likely to increase use of broad-spectrum antibiotics.
Study design: Observational, retrospective.
Setting: VA medical centers, 2006–2010.
Synopsis: In this study, VA medical center physicians evaluated 95,511 hospitalizations for pneumonia at 128 hospitals between 2006 and 2010, the years following the 2005 guidelines. Annual analyses were performed to assess antibiotic selection as well as evidence of resistant bacteria from blood and respiratory cultures. Researchers found that while the use of broad-spectrum antibiotics increased drastically during the study period (vancomycin from 16% to 31% and piperacillin-tazobactam from 16% to 27%, P<0.001 for both), the incidence of resistant organisms either decreased or remained stable.
Additionally, physicians were no better at matching broad-spectrum antibiotics to patients infected with resistant organisms at the end of the study period than they were at the start. The authors conclude that more research is urgently needed to identify patients at risk for resistant organisms in order to prescribe broad-spectrum antibiotics more appropriately.
This study did not evaluate patients’ clinical outcomes, so it is unclear whether they may have benefitted clinically from the implementation of the guidelines. For now, the optimal approach to empiric therapy for HCAP remains undefined.
Bottom line: Despite a marked increase in the use of broad-spectrum antibiotics for HCAP in the years following a change in treatment guidelines, doctors showed no improvement at matching these antibiotics to patients infected with resistant organisms.
Citation: Jones BE, Jones MM, Huttner B, et al. Trends in antibiotic use and nosocomial pathogens in hospitalized veterans with pneumonia at 128 medical centers, 2006-2010. Clin Infect Dis. 2015;61(9):1403-1410.
Discontinuing Inhaled Corticosteroids in COPD Reduces Risk of Pneumonia
Clinical question: Is discontinuation of inhaled corticosteroids (ICSs) in patients with COPD associated with a decreased risk of pneumonia?
Background: ICSs are used in up to 85% of patients treated for COPD but may be associated with adverse systemic side effects, including pneumonia. Trials of weaning patients off ICSs and replacing them with long-acting bronchodilators have found few adverse outcomes; however, the benefits of discontinuation on adverse events, including pneumonia, have been unclear.
Study design: Case-control study.
Setting: Quebec health systems.
Synopsis: Using the Quebec health insurance databases, a study cohort of 103,386 patients with COPD on ICSs was created. Patients were followed for a mean of 4.9 years; 14,020 patients who were hospitalized for pneumonia or died from pneumonia outside the hospital were matched to control subjects. Discontinuation of ICSs was associated with a 37% decrease in serious pneumonia (relative risk [RR] 0.63; 95% CI, 0.60–0.66). The risk reduction occurred as early as one month after discontinuation of ICSs. Risk reduction was greater with fluticasone (RR 0.58; 95% CI, 0.54–0.61) than with budesonide (RR 0.87; 95% CI, 0.7–0.97).
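As a refresher on how relative risks and confidence intervals like those above are computed, here is a minimal sketch using the standard log-scale (Wald) approximation; the counts in the example are hypothetical and are not the study's data:

```python
import math

def relative_risk_ci(events_exposed, n_exposed, events_unexposed, n_unexposed, z=1.96):
    """Relative risk with a Wald-type confidence interval on the log scale."""
    risk_exposed = events_exposed / n_exposed
    risk_unexposed = events_unexposed / n_unexposed
    rr = risk_exposed / risk_unexposed
    # Standard error of log(RR) for two independent proportions
    se_log_rr = math.sqrt(
        1 / events_exposed - 1 / n_exposed + 1 / events_unexposed - 1 / n_unexposed
    )
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical counts (not from the study): 120 serious pneumonia events among
# 2,000 patients who discontinued ICSs vs. 190 events among 2,000 who continued.
print(relative_risk_ci(120, 2000, 190, 2000))  # -> roughly RR 0.63 (95% CI, 0.51-0.79)
```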
Population size and length of follow-up may help explain why a risk reduction in pneumonia was seen in this study but not in other recent randomized trials of ICS discontinuation. A limitation of this study was its observational design; however, its results suggest that use of ICSs in COPD patients should be highly selective, as indiscriminate use can subject patients to an elevated risk of hospitalization or death from pneumonia.
Bottom line: Discontinuation of ICSs in patients with COPD is associated with a decreased risk of contracting serious pneumonia. This reduction appears greatest with fluticasone.
Citation: Suissa S, Coulombe J, Ernst P. Discontinuation of inhaled corticosteroids in COPD and the risk reduction of pneumonia. Chest. 2015;148(5):1177-1183.
Short Take
Increase in Rates of Prescription Drug Use and Polypharmacy Seen
The percentage of Americans who reported taking prescription medications increased substantially from 1999 to 2012 (51% to 59%), as did the percentage who reported taking at least five prescription medications.
Citation: Kantor ED, Rehm CD, Haas JS, Chan AT, Giovannucci EL. Trends in prescription drug use among adults in the United States from 1999-2012. JAMA. 2015;314(17):1818-1830.
MEDS Score for Sepsis Might Best Predict ED Mortality
Clinical question: Which illness severity score best predicts outcomes in emergency department (ED) patients presenting with infection?
Background: Several scoring models have been developed to predict illness severity and mortality in patients with infection. Some scores were developed specifically for patients with sepsis and others for patients in a general critical care setting. These different scoring models have not been specifically compared and validated in the ED setting in patients with infection of various severities.
Study design: Prospective, observational study.
Setting: Adult ED in a metropolitan tertiary, university-affiliated hospital.
Synopsis: Investigators prospectively identified 8,871 adult inpatients with infection from a single-center ED. Data to calculate five prediction models were collected. The models were:
- Mortality in Emergency Department Sepsis (MEDS) score;
- Acute Physiology and Chronic Health Evaluation II (APACHE II);
- Simplified Acute Physiology Score II (SAPS II);
- Sequential Organ Failure Assessment (SOFA); and
- Severe Sepsis Score (SSS).
Severity score performance was assessed for the overall cohort and for subgroups, including infection without systemic inflammatory response syndrome, sepsis, severe sepsis, and septic shock. The MEDS score best predicted mortality in the cohort, with an area under the receiver operating characteristic curve of 0.92. However, older scoring models such as APACHE II and SAPS II still discriminated well, especially in patients who were admitted to the ICU. All scores tended to overestimate mortality.
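For readers less familiar with the c-statistic, the sketch below shows how the discrimination of a severity score can be quantified as the area under the receiver operating characteristic curve using scikit-learn; the score values and outcomes are fabricated placeholders, not study data:

```python
from sklearn.metrics import roc_auc_score

# Fabricated example: severity score per patient and in-hospital mortality (1 = died).
scores = [2, 5, 7, 12, 3, 10, 15, 4, 11, 6]
died   = [0, 0, 1, 1,  0, 0,  1,  0, 1,  0]

# Area under the ROC curve: the probability that a randomly chosen non-survivor
# has a higher score than a randomly chosen survivor.
print(roc_auc_score(died, scores))  # -> about 0.96 for these made-up values
```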
Bottom line: The MEDS score may best predict illness severity in septic patients presenting to the ED, but other scoring models may be better-suited for specific patient populations.
Citation: Williams JM, Greenslade JH, Chu K, Brown AF, Lipman J. Severity scores in emergency department patients with presumed infection: a prospective validation study. Crit Care Med. 2016;44(3):539-547.
Continuous Chest Compressions Do Not Improve Outcome Compared to Chest Compressions Interrupted for Ventilation
Clinical question: In cardiopulmonary resuscitation, do continuous chest compressions improve survival or neurologic outcome compared to chest compressions interrupted for ventilation?
Background: Animal models have demonstrated that interruptions in chest compressions are associated with decreased survival and worse neurologic outcome in cardiac arrests. Observational studies in humans have suggested that for out-of-hospital cardiac arrests, continuous compressions result in improved survival.
Study Design: Unblinded, randomized, cluster design with crossover.
Setting: One hundred fourteen emergency medical service (EMS) agencies across eight clinical sites in North America.
Synopsis: Patients with out-of-hospital cardiac arrest received either continuous chest compressions with asynchronous positive-pressure ventilations or interrupted compressions at a rate of 30 compressions to two ventilations. EMS agencies were divided into clusters and randomly assigned to deliver either resuscitation strategy. Twice per year, each cluster switched treatment strategies.
During the active enrollment phase, 12,653 patients were enrolled in the intervention arm and 11,058 were enrolled in the control arm. The primary outcome of survival to hospital discharge was comparable between the two groups, with a 9.0% survival rate in the intervention group compared with 9.7% in the control group (P=0.07). The secondary outcome of survival with favorable neurologic status was similar at 7.0% in the intervention group and 7.7% in the control group.
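To illustrate how such a comparison of proportions is typically tested, here is a minimal sketch using a chi-square test on event counts back-calculated from the published group sizes and percentages; the counts are approximations, so the result is only meant to land in the neighborhood of the reported P value:

```python
from scipy.stats import chi2_contingency

# Approximate counts back-calculated from the reported rates (illustrative only):
# intervention: ~9.0% of 12,653 survived; control: ~9.7% of 11,058 survived.
survived_int, n_int = 1139, 12653
survived_ctl, n_ctl = 1073, 11058

table = [
    [survived_int, n_int - survived_int],
    [survived_ctl, n_ctl - survived_ctl],
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square p-value: {p_value:.3f}")  # on the order of the reported P=0.07
```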
There was only a small difference in the proportion of minutes devoted to compressions between the two groups, so the similarity in outcomes may be reflective of high-quality chest compressions. Additional limitations include a lack of standardization of post-resuscitation care and a lack of measurement of oxygen or ventilation delivered.
Bottom line: For out-of-hospital cardiac arrests, continuous chest compressions with positive-pressure ventilation did not increase survival or improve neurologic outcome compared to interrupted chest compressions.
Citation: Nichol G, Leroux B, Wang H, et al. Trial of continuous or interrupted chest compressions during CPR. N Engl J Med. 2015;373(23):2203-2214.
ATRIA Better at Predicting Stroke Risk in Patients with Atrial Fibrillation Than CHADS2, CHA2DS2-VASc
Clinical question: Does the Anticoagulation and Risk Factors in Atrial Fibrillation (ATRIA) risk score more accurately identify patients with atrial fibrillation (Afib) who are at low risk for ischemic stroke than the CHADS2 or CHA2DS2-VASc score?
Background: More accurate and reliable stroke risk prediction tools are needed to optimize anticoagulation decision making in patients with Afib. Recently, a new clinically based risk score, the ATRIA, has been developed and validated. This risk score assigns points based on four age categories (as well as an interaction of age and prior stroke); female gender; renal function; and history of diabetes, congestive heart failure, and hypertension. This study compared the predictive ability of the ATRIA risk score with the CHADS2 and CHA2DS2-VASc risk scores and their implications for anticoagulant treatment in Afib patients.
Study Design: Retrospective cohort study.
Setting: Afib patients not using warfarin from the United Kingdom’s Clinical Practice Research Datalink (CPRD) database, January 1998 to January 2012.
Synopsis: A total of 60,594 patients with Afib were followed until occurrence of ischemic stroke, prescription of warfarin, death, or the study’s end. The annualized stroke rate was 2.99%. Patients with moderate- and high-risk CHA2DS2-VASc scores had lower event rates than those with corresponding ATRIA and CHADS2 scores. C-statistics for the full point scores were 0.70 (95% CI, 0.69–0.71) for ATRIA and 0.68 (95% CI, 0.67–0.69) for both the CHADS2 and CHA2DS2-VASc scores. The net reclassification indices of ATRIA compared with the CHADS2 and CHA2DS2-VASc risk scores were 0.137 and 0.233, respectively, reflecting that the ATRIA risk score better categorizes patients who develop an event.
The ATRIA risk score more accurately identified low-risk patients whom the CHA2DS2-VASc score assigned to higher-risk categories. The results persisted even after restricting the analysis to more recent follow-up, excluding unspecified strokes, and excluding renal dysfunction as a predictor. Most improvements with ATRIA were the result of “down classification,” suggesting that using the CHA2DS2-VASc risk score could lead to overtreatment of patients at very low risk of stroke.
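For reference, the sketch below encodes the commonly published point assignments for the two comparator scores, CHADS2 and CHA2DS2-VASc; the ATRIA score's fuller point table (including the age-by-prior-stroke interaction) is not reproduced here, and the functions are illustrative rather than code from the study:

```python
def chads2(chf, hypertension, age, diabetes, prior_stroke_tia):
    """CHADS2: 1 point each for CHF, hypertension, age >= 75, diabetes; 2 for prior stroke/TIA."""
    return (int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
            + 2 * int(prior_stroke_tia))


def cha2ds2_vasc(chf, hypertension, age, diabetes, prior_stroke_tia,
                 vascular_disease, female):
    """CHA2DS2-VASc: adds vascular disease, age 65-74, and female sex; age >= 75 scores 2."""
    if age >= 75:
        age_points = 2
    elif age >= 65:
        age_points = 1
    else:
        age_points = 0
    return (int(chf) + int(hypertension) + age_points + int(diabetes)
            + 2 * int(prior_stroke_tia) + int(vascular_disease) + int(female))


# Example: a 72-year-old woman with hypertension and diabetes, no other risk factors.
print(chads2(False, True, 72, True, False))                      # -> 2
print(cha2ds2_vasc(False, True, 72, True, False, False, True))   # -> 4
```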
Bottom line: The ATRIA risk score better identifies Afib patients who are at low risk for stroke compared to CHADS2 and CHA2DS2-VASc scores.
Citation: van den Ham HA, Klungel OH, Singer DE, Leufkens HG, van Staa TP. Comparative performance of ATRIA, CHADS2, and CHA2DS2-VASc risk scores predicting stroke in patients with atrial fibrillation: results from a national primary care database. J Am Coll Cardiol. 2015;66(17):1851-1859.
Continued Statin Therapy Has No Survival Benefit in Advanced Life-Limiting Illness
Clinical question: What is the impact of statin discontinuation in the palliative care setting?
Background: There is compelling evidence for prescribing statins for primary or secondary prevention of cardiovascular disease for patients with long life expectancy, but there is no evidence to guide decisions to discontinue therapy in those with limited prognosis.
Study design: Multicenter, unblinded, randomized, pragmatic clinical trial.
Setting: Academic and community-based clinical sites as a part of the Palliative Care Research Cooperative Group.
Synopsis: The study analyzed the outcomes of 381 patients who had received a prognosis of one-month to one-year life expectancy, with an average age of 74. The participants were divided into two groups: a continued statin group and a discontinued statin group. Of the 381 participants, 212 survived beyond 60 days.
There was no significant difference in the proportion of participants who died within 60 days, with 45 (23.8%) in the discontinued statin group and 39 (20.3%) in the continued statin group (90% CI, −3.5% to 10.5%; P=0.36). Total quality of life was better for the group discontinuing statin therapy (mean McGill QOL score 7.11 versus 6.85; P=0.04). The researchers estimated that surviving participants would save $3.37 per day and $716 per patient.
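To show how a confidence interval for a difference in proportions like the one above is constructed, here is a minimal sketch using the normal approximation with z = 1.645 for a 90% interval; the group denominators (189 discontinuation, 192 continuation) are inferred from the reported percentages and should be treated as approximate:

```python
import math

def risk_difference_ci(events1, n1, events2, n2, z=1.645):  # z = 1.645 for a 90% CI
    """Risk difference (group1 - group2) with a Wald-type confidence interval."""
    p1, p2 = events1 / n1, events2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Approximate denominators inferred from the reported percentages (illustrative):
# 45/189 deaths (23.8%) with statins discontinued vs. 39/192 (20.3%) continued.
diff, lo, hi = risk_difference_ci(45, 189, 39, 192)
print(f"risk difference {diff:.1%} (90% CI {lo:.1%} to {hi:.1%})")
```

With these assumed denominators, the interval works out to roughly −3.5% to 10.5%, consistent with the reported result.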
Because of a lack of formal guidelines for discontinuation of statin therapy in patients with life-limiting illness, the discontinuation of statin therapy is mostly based on patient-provider decisions.
The results from this study will help physicians have thoughtful patient-provider discussions regarding statin discontinuation.
Citation: Kutner JS, Blatchford PJ, Taylor DH Jr, et al. Safety and benefit of discontinuing statin therapy in the setting of advanced, life-limiting illness: a randomized clinical trial. JAMA Intern Med. 2015;175(5):691–700. doi:10.1001/jamainternmed.2015.0289.
D-Dimer Not Reliable Marker to Stop Anticoagulation Therapy
Clinical question: In patients with a first unprovoked VTE, is it safe to use a normalized D-dimer test to stop anticoagulation therapy?
Background: The risk of VTE recurrence after stopping anticoagulation is higher in patients who have elevated D-dimer levels after treatment. It is unknown whether we can use normalized D-dimer levels to guide the decision about whether or not to stop anticoagulation.
Study design: Prospective cohort study.
Setting: Thirteen university-affiliated centers.
Synopsis: Using D-dimer testing, study authors screened 410 adult patients who had completed three to seven months of anticoagulation therapy for a first unprovoked VTE. In patients with a negative D-dimer test, anticoagulation was stopped and the test was repeated after one month. Those with two consecutive negative D-dimer tests remained off anticoagulation indefinitely and were followed for an average of 2.2 years. Among those 319 patients, the overall recurrent VTE rate was 6.7% per patient-year. Subgroup analysis was performed among men, women not on estrogen therapy, and women on estrogen therapy; recurrence rates per patient-year were 9.7%, 5.4%, and 0%, respectively.
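A rate quoted "per patient-year" divides the number of events by the total follow-up time, so 6.7% per patient-year is equivalent to 6.7 events per 100 patient-years. A minimal sketch; the event count shown is back-calculated from the reported rate and is illustrative only:

```python
def rate_per_100_patient_years(events, total_patient_years):
    """Incidence rate expressed as events per 100 patient-years of follow-up."""
    return 100 * events / total_patient_years

# 319 patients followed for an average of 2.2 years = ~702 patient-years;
# ~47 recurrent VTEs is back-calculated from the reported 6.7% per patient-year.
print(rate_per_100_patient_years(47, 319 * 2.2))  # ≈ 6.7
```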
This study used a point-of-care D-dimer test with a qualitative (positive or negative) result; it is unclear whether the findings can be generalized to other D-dimer assays. Additionally, although the study found a lower recurrent VTE rate among women, it was not powered for subgroup analyses.
Bottom line: The high rate of recurrent VTE among men makes a negative D-dimer an unsafe basis for stopping anticoagulation after a first unprovoked VTE. In women, D-dimer testing could potentially help guide length of treatment, but, given the study's limitations, more evidence is needed.
Citation: Kearon C, Spencer FA, O’Keeffe D, et al. D-Dimer testing to select patients with a first unprovoked venous thromboembolism who can stop anticoagulant therapy. Ann Intern Med. 2015;162(1):27-34.