SIRS Criteria Could Identify More Patients with Severe Sepsis
Clinical question: Does inclusion of two or more SIRS criteria in the definition of severe sepsis accurately identify patients with higher mortality risk, as compared with patients with infection and organ failure but with fewer than two SIRS criteria?
Background: SIRS describes dysregulation of the inflammatory response to illness. The current definition of severe sepsis includes evidence of infection, organ failure, and two or more SIRS criteria. This study sought to test the validity of inclusion of two or more SIRS criteria in the definition of severe sepsis to differentiate patients at higher mortality risk.
Study design: 14-year retrospective study.
Setting: One hundred seventy-two ICUs in Australia and New Zealand.
Synopsis: Investigators evaluated 109,663 patients; 87.9% had SIRS-positive severe sepsis, and 12.1% had SIRS-negative severe sepsis. Patients with SIRS-positive sepsis were younger, more severely ill with higher mortality, and more likely to have community-acquired infections. Mortality decreased in both groups over the 14-year study period: from 36.1% to 18.3% in SIRS-positive patients and from 27.7% to 8.5% in SIRS-negative patients.
Being SIRS-positive independently increased the risk of death by 26%; however, modeling showed a linear relationship between mortality and the number of SIRS criteria, with each additional criterion, from zero to four, increasing mortality by 13%. There was no step increase in mortality risk at the two-criteria cutoff.
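To see what a linear per-criterion relationship implies, the sketch below applies the reported 13% relative increase per criterion to a hypothetical baseline; the 10% baseline mortality is an assumption for illustration only, not a figure from the study. The progression is smooth, with no jump at two criteria.

```python
# Illustration: mortality under a multiplicative per-criterion model.
# The 10% baseline is an assumed value (not from the study); the 13%
# relative increase per criterion is the study's reported estimate.
baseline_mortality = 0.10

for n_criteria in range(5):  # 0 through 4 SIRS criteria
    mortality = baseline_mortality * (1.13 ** n_criteria)
    print(f"{n_criteria} criteria: estimated mortality {mortality:.1%}")
# Rises smoothly (10.0%, 11.3%, 12.8%, 14.4%, 16.3%) with no step at 2.
```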
Limiting the severe sepsis definition to patients with two or more SIRS criteria missed one in eight patients admitted to the ICU with infection and organ failure. SIRS-negative severe sepsis patients had substantial mortality and showed trends similar to those of SIRS-positive patients, suggesting the two groups are separate phenotypes of the same condition.
Bottom line: This study challenges the sensitivity, face validity, and construct validity of the two-criteria SIRS cutoff. Redefining severe sepsis to include a lower number of SIRS criteria may diagnose more patients with organ failure and infection.
Citation: Kaukonen KM, Bailey M, Pilcher D, Cooper DJ, Bellomo R. Systemic inflammatory response syndrome criteria in defining severe sepsis. N Engl J Med. 2015;372:1629-1638.
Trimethoprim-Sulfamethoxazole Use in Older Patients Taking Spironolactone
Clinical question: Does trimethoprim-sulfamethoxazole (TMP-SMX) increase the risk of sudden death in older patients taking spironolactone?
Background: TMP-SMX increases the risk of hyperkalemia when used with spironolactone; however, previous studies have not examined whether the drug interaction is associated with an increased risk of sudden cardiac death, a predictable consequence of hyperkalemia.
Study design: Population-based, nested, case-control study.
Setting: Ontario, Canada.
Synopsis: Investigators identified 11,968 patients aged 66 years or older who suffered sudden death between 1994 and 2011 while receiving spironolactone; for 328 of these patients, death occurred within 14 days of antibiotic exposure. The rate of sudden death in patients receiving TMP-SMX was compared to the rate of sudden death in patients who instead received other study antibiotics.
Compared with amoxicillin, TMP-SMX was associated with a more than twofold increase in the risk of sudden death (OR 2.46; 95% CI, 1.55-3.90). The absolute rate of sudden death was 0.74% among patients taking spironolactone who were prescribed TMP-SMX, compared with 0.35% among those prescribed amoxicillin.
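As a worked example, the absolute rates above imply a number needed to harm; the arithmetic below is illustrative and not an analysis reported in the paper.

```python
# Worked arithmetic (not from the paper): number needed to harm (NNH)
# implied by the absolute rates of sudden death above.
rate_tmp_smx = 0.0074      # 0.74% with TMP-SMX
rate_amoxicillin = 0.0035  # 0.35% with amoxicillin

absolute_risk_increase = rate_tmp_smx - rate_amoxicillin
nnh = 1 / absolute_risk_increase
print(f"Absolute risk increase: {absolute_risk_increase:.2%}")   # 0.39%
print(f"NNH: roughly {nnh:.0f} co-prescriptions per excess death")  # ~256
```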
Because TMP-SMX and spironolactone are common medications, the likelihood of co-prescription leading to this drug interaction is high. Although the study does not establish causality, it suggests that alternative antibiotics should be used in elderly patients on spironolactone when possible.
Bottom line: TMP-SMX is associated with an increased risk of sudden death in older patients taking spironolactone.
Citation: Antoniou T, Hollands S, Macdonald EM, Gomes T, Mamdani MM, Juurlink DN. Trimethoprim-sulfamethoxazole and risk of sudden death among patients taking spironolactone. CMAJ. 2015;187(4):E138-E143.
Medicare Nonpayment for Hospital-Acquired Conditions May Have Reduced Infection Rates
Clinical question: What was the effect of the Centers for Medicare and Medicaid Services’ (CMS) nonpayment for hospital-acquired conditions?
Background: In 2008, CMS implemented the Hospital-Acquired Conditions (HAC) initiative, denying incremental payment to hospitals for complications of hospital care, including central-line associated bloodstream infections (CLABSIs), catheter-associated urinary tract infections (CAUTIs), hospital-acquired pressure ulcers, and injurious patient falls.
Study design: Quasi-experimental data review, pre-post comparison of outcomes.
Setting: Nearly 1,400 U.S. hospitals contributing data to the National Database of Nursing Quality Indicators (NDNQI).
Synopsis: Using time points before and after implementation of the CMS initiative, the authors found that the rates of CLABSIs and CAUTIs dropped significantly after implementation (11% reduction of CLABSIs, 10% reduction of CAUTIs). The rates of pressure ulcers and falls did not change significantly.
These findings differ from those of an earlier study, which found the HAC initiative did not lead to a reduction in the rates of CLABSIs or CAUTIs. The authors point out that the databases used were different, as was the time frame of data collection.
The authors hypothesize that the reason CLABSI and CAUTI rates decreased while fall and pressure ulcer rates were unchanged was better evidence supporting infection prevention practices for the former. An accompanying editorial argues that the differential outcomes may have been due to increased challenges in implementing practices for the latter measures rather than differential evidence.
Limitations of the study include characteristics of hospitals reporting to the NDNQI and accuracy of data capture by individual reporting hospitals. Changes over time may also be attributed to factors other than the HAC initiative.
Bottom line: Nonpayment for HACs may have led to decreases in rates of CLABSIs and CAUTIs, but rates of pressure ulcers and falls remained unchanged.
Citation: Waters TM, Daniels MJ, Bazzoli GJ, et al. Effect of Medicare’s nonpayment for hospital-acquired conditions. JAMA Intern Med. 2015;175(3):347-354.
Perioperative Phlebotomy Might Cause Significant Blood Loss in Cardiac Surgery Patients
Clinical question: What are the frequency of laboratory testing, the average total blood volume drawn per patient, and the resulting transfusion utilization in patients undergoing cardiac surgery?
Background: Healthcare providers seldom recognize the cumulative volume of blood drawn for laboratory testing; consequently, its consequences and possible solutions have not been fully evaluated.
Study design: Retrospective cohort study.
Setting: Major U.S. academic medical center.
Synopsis: The authors examined 1,894 patients undergoing cardiac surgery over a six-month period. They determined the number and type of lab tests drawn on each patient during hospitalization, as well as the estimated total blood volume drawn on each patient.
Patients averaged 115 lab tests during their hospitalization, with a cumulative median phlebotomy volume of 454 mL (equivalent to one to two units of red blood cells). The authors also found that increasing total phlebotomy volume correlated with increased blood product use and that longer length of stay correlated with higher levels of both.
On an average patient day in the ICU, 116 mL of blood was drawn, of which 80 mL was discarded at the bedside.
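A back-of-the-envelope check on these figures follows; the 300 mL volume assumed for one unit of packed red cells is an approximation introduced here, not a number from the study.

```python
# Rough check on the reported ICU figures: fraction of drawn blood
# discarded, and ICU days per RBC-unit equivalent of phlebotomy.
drawn_per_icu_day_ml = 116
discarded_per_icu_day_ml = 80
rbc_unit_ml = 300  # assumption: approximate volume of one RBC unit

waste_fraction = discarded_per_icu_day_ml / drawn_per_icu_day_ml
days_per_unit = rbc_unit_ml / drawn_per_icu_day_ml
print(f"Fraction discarded at the bedside: {waste_fraction:.0%}")  # ~69%
print(f"ICU days per RBC-unit equivalent drawn: {days_per_unit:.1f}")  # ~2.6
```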
Limitations include limited generalizability: the study focused on cardiac surgery patients, all of whom stayed in an ICU and had central lines as the source of the majority of their blood draws. The appropriateness of lab testing and transfusions was not examined in this study.
Bottom line: Blood volumes equivalent to one to two units of red blood cells are drawn for lab tests on patients undergoing cardiac surgery, with a large portion of that blood being wasted at the bedside. Initiatives to reduce blood draw volume may help to reduce resource utilization related to such high rates of blood loss from phlebotomy.
Citation: Koch CG, Reinecks EZ, Tang AS, et al. Contemporary bloodletting in cardiac surgical care. Ann Thorac Surg. 2015;99(3):779-784.
Clostridium difficile Infection Rates in the U.S. in 2011
Clinical question: What are the incidence, recurrence rate, and mortality rate of Clostridium difficile infection (CDI) in the U.S. in 2011?
Background: The epidemiology of CDI has continued to change, and its impact on healthcare has continued to grow.
Study design: Cross-sectional analysis.
Setting: U.S.
Synopsis: The incidence, recurrence rate, and mortality rate of C. diff infection were estimated using data from 10 sites in the CDC Emerging Infections Program. Incidence was estimated at 453,000 cases in 2011, with higher rates among females, whites, and those over age 65. One-third of the cases were community associated. There were an estimated 83,000 first-time recurrent infections and 29,300 deaths within 30 days of diagnosis, with half of those deaths attributable to CDI itself.
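For scale, the national estimate can be converted to a rough per-capita rate; the 2011 U.S. population figure below is an outside assumption, not a number from the paper.

```python
# Illustrative per-capita rate implied by the national estimate.
# The population figure is an assumption (~312 million in 2011).
estimated_cases = 453_000
us_population_2011 = 312_000_000  # assumed

rate_per_100k = estimated_cases / us_population_2011 * 100_000
print(f"Approximate incidence: {rate_per_100k:.0f} per 100,000 population")
# ~145 per 100,000
```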
This study was limited by a case definition that relied solely on positive test results and by the ongoing transition of labs to nucleic acid amplification testing (NAAT), both of which can lead to inclusion of colonization rather than actual disease. Recurrence and mortality rates may also have been underestimated, because the study counted only first-time recurrences and deaths documented in the medical record.
Bottom line: C. diff caused nearly half a million infections and was associated with roughly 29,000 deaths in the U.S. in 2011.
Citation: Lessa FC, Mu Y, Bamberg WM, et al. Burden of Clostridium difficile infection in the United States. N Engl J Med. 2015;372:825-834.
Effect of CT Angiography on Outcomes in Patients with New-Onset Stable Chest Pain
Clinical question: Does CT angiography (CTA) improve clinical outcomes in patients with new-onset stable chest pain more than functional testing?
Background: Chest pain is a common clinical problem, and multiple noninvasive tests are available to detect coronary artery disease (CAD). CT angiography is more accurate than functional testing and may decrease unnecessary invasive testing and improve outcomes in patients with new-onset stable chest pain.
Study design: Pragmatic, randomized, comparative-effectiveness trial.
Setting: One hundred ninety-three North American sites.
Synopsis: Ten thousand three symptomatic outpatients (mean age 60 years) with at least one cardiovascular risk factor were randomized to CTA or functional testing to detect CAD. The composite primary endpoint of death, myocardial infarction, hospitalization for unstable angina, or major procedural complication occurred in 3.3% of CTA patients and 3.0% of functional-testing patients (adjusted hazard ratio 1.04; 95% confidence interval, 0.83-1.29; P=0.75). CTA patients underwent fewer catheterizations showing nonobstructive CAD (3.4% vs. 4.3%, P=0.02).
More CTA patients underwent catheterization within 90 days of randomization, however (12.2% vs. 8.1%). Patients in the CTA group had higher overall radiation exposure, although their mean cumulative dose per patient was lower than that of the functional-testing group (10.0 mSv vs. 11.3 mSv).
Interestingly, 6.2% of CTA patients versus 3.2% of functional-testing patients underwent revascularization, but the study was not powered to assess the effect of catheterization or revascularization rates on outcomes.
Because of its pragmatic design, the results are generalizable to real-world settings; CTA did not improve outcomes compared with functional testing in patients undergoing evaluation for CAD.
Bottom line: No improvement was seen in clinical outcomes for symptomatic patients undergoing evaluation for CAD with CTA compared with those receiving functional testing.
Citation: Douglas PS, Hoffmann U, Patel MR, et al. Outcomes of anatomical versus functional testing for coronary artery disease. N Engl J Med. 2015;372(14):1291-1300.
Early Goal-Directed Therapy for Sepsis Offers No Benefit Over Usual Care
Clinical question: Does early goal-directed therapy (EGDT) improve mortality over usual care in patients with severe sepsis or septic shock?
Background: Recent trials (ARISE, ProCESS) showed EGDT provided no mortality benefit over usual care. Questions remain about the effectiveness of intensive monitoring protocols, as well as the evolution of what constitutes usual care. The ProMISe trial tested whether EGDT reduces mortality in a cost-effective way.
Study design: Pragmatic, open, multicenter, parallel-group RCT.
Setting: English National Health Service hospitals that did not routinely use EGDT with continuous ScvO2 monitoring.
Synopsis: The authors enrolled 1,260 adult patients with early severe sepsis or septic shock and randomized them to either usual care or six hours of EGDT. Data were collected prospectively on the EGDT group and retrospectively on the usual care group.
By intention-to-treat analysis, all-cause mortality at 90 days was not significantly different (unadjusted RR 1.01; 95% CI, 0.85-1.20; adjusted OR 0.95; 95% CI, 0.74-1.24; P=0.73). EGDT patients received more intensive therapy, their quality-of-life scores were similar, and their average costs were higher, though the difference was not statistically significant. The calculated probability that EGDT was cost effective was below 20%.
Usual care patients had lower-than-expected mortality (29% vs. 40% expected), limiting both the detectable treatment effect of EGDT and extrapolation to groups with higher mortality. Comparison with older studies is limited by the evolution of usual care for sepsis, including earlier recognition, earlier antibiotic administration, and greater use of vasoactive drugs. This study adds significant information about quality of life and cost to the discussion about EGDT.
Bottom line: The ProMISe study completes a powerful trio of papers suggesting that EGDT might be an expensive option that offers no clinical benefit over usual care.
Citation: Mouncey PR, Osborn TM, Power GS, et al. Trial of early, goal-directed resuscitation for septic shock. N Engl J Med. 2015;372:1301-1311.
NSAID Use by Patients on Antithrombotic Therapy Has Bleeding, Cardiovascular Risks
Clinical question: Is there increased risk of bleeding or cardiovascular events when using NSAIDs while on antithrombotic therapy for secondary cardiovascular prevention?
Background: NSAIDs are among the most commonly used medications, despite the fact that individual NSAIDs have been associated with increased cardiovascular risk, and despite guidelines recommending against the use of NSAIDs in patients with cardiovascular disease. The risk of using NSAIDs with antithrombotic medications after first MI has not yet been examined.
Study design: Retrospective registry study.
Setting: Patients registered in official medical, pharmacy, and civil databases in Denmark, with unique individual identifier numbers allowing for database cross-reference.
Synopsis: The authors enrolled 61,971 of a possible 88,662 patients aged 30 years or older admitted for a first-time MI, following them from 30 days after discharge and tracking endpoint events and prescriptions. NSAID prescriptions were identified for 20,931 patients. Patients were placed in cohorts by their specific antithrombotic regimen (monotherapy or combination therapy with aspirin, clopidogrel, or a vitamin K antagonist) and specific NSAID use, accounting for changes in prescription combinations for a given individual.
Antithrombotic use was equal between the NSAID and non-NSAID groups. NSAID use, regardless of duration, was associated with an increased risk of admission or death from bleeding (HR 2.02; 95% CI, 1.81-2.26). NSAID use was also associated with increased cardiovascular endpoints (HR 1.40; 95% CI, 1.30-1.49), including with the most common antithrombotic regimens.
This study is limited by its observational design, lack of more detailed database information, and use of prescription data. Differences in mortality were not separately presented. This study implies that even short exposures to NSAIDs while on antithrombotic therapy may be problematic.
Bottom line: NSAID use is associated with significant bleeding and cardiovascular events in patients who are on antithrombotic medications following their first MI.
Citation: Schjerning Olsen AM, Gislason GH, McGettigan P, et al. Association of NSAID use with risk of bleeding and cardiovascular events in patients receiving antithrombotic therapy after myocardial infarction. JAMA. 2015;313(8):805-814.
Treatment of Patients with Atrial Fibrillation, Low CHA2DS2-VASc Scores
Clinical question: Is anticoagulation beneficial for patients with atrial fibrillation (Afib) and low CHA2DS2-VASc score (0 for men, 1 for women) or for those with one additional stroke risk factor?
Background: Guidelines nearly universally recommend anticoagulation for patients with a CHA2DS2-VASc score of 2 or higher but differ on recommendations for patients with a score of 1.
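For reference, the sketch below encodes the standard CHA2DS2-VASc components; it is a minimal illustration, not the scoring implementation used in the registry analysis.

```python
# Minimal sketch of the standard CHA2DS2-VASc components (illustrative).
def cha2ds2_vasc(chf: bool, hypertension: bool, age: int, diabetes: bool,
                 prior_stroke_tia: bool, vascular_disease: bool,
                 female: bool) -> int:
    score = 0
    score += 1 if chf else 0                # C: congestive heart failure
    score += 1 if hypertension else 0       # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A
    score += 1 if diabetes else 0           # D: diabetes mellitus
    score += 2 if prior_stroke_tia else 0   # S2: prior stroke/TIA/embolism
    score += 1 if vascular_disease else 0   # V: vascular disease
    score += 1 if female else 0             # Sc: sex category (female)
    return score

# Example: a 70-year-old woman with hypertension scores 3.
print(cha2ds2_vasc(False, True, 70, False, False, False, True))  # 3
```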
Study design: Cohort study.
Setting: Multiple national registries in Denmark.
Synopsis: Patients classified as very low risk by the CHA2DS2-VASc score (0 for men, 1 for women) had particularly low stroke rates and did not appear to benefit from therapy with aspirin or warfarin, either at one year or at full follow-up (mean 5.9 years).
The addition of one stroke risk factor significantly increased untreated stroke risk (roughly threefold). Hazard ratios favored treatment with warfarin in these patients, most notably through a reduction in all-cause mortality (though this was more pronounced at one year than at full follow-up).
Bottom line: Although guidelines differ on treatment strategy for patients with Afib and one stroke risk factor (i.e., CHA2DS2-VASc score of 1 for men, 2 for women), this study supports treatment with warfarin.
Citation: Lip GY, Skjöth F, Rasmussen LH, Larsen TB. Oral anticoagulation, aspirin, or no therapy in patients with nonvalvular AF with 0 or 1 stroke risk factor based on the CHA2DS2-VASc score. J Am Coll Cardiol. 2015;65(14):1385-1394.
Ticagrelor for Secondary Prevention after Myocardial Infarction
Clinical question: Is ticagrelor for secondary prevention indicated for more than one year after myocardial infarction (MI)?
Background: The efficacy and safety of ticagrelor combined with low-dose aspirin for secondary prevention more than one year after MI have not previously been established.
Study design: Randomized, double-blind, placebo-controlled clinical trial.
Setting: Multicenter trial across 31 countries.
Synopsis: Investigators randomized 21,162 patients one to three years after a first MI to ticagrelor 90 mg twice daily, ticagrelor 60 mg twice daily, or placebo. All patients also received low-dose aspirin (75 mg to 100 mg). Notably, many patients had been off dual antiplatelet therapy before the trial began, because most were enrolled closer to two years after the index MI. The manufacturer of ticagrelor sponsored the trial.
The study authors’ analysis showed that treating 10,000 patients with the 90 mg dose would prevent 40 cardiac events (cardiovascular death, MI, or stroke), while the 60 mg dose would prevent 42 events; however, the 90 mg dose would cause 41 major bleeding events and the 60 mg dose 31 major bleeding events. Fatal bleeding was less than 1% in all groups, though patients with increased bleeding risk were excluded.
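Restated as numbers needed to treat and harm, the per-10,000 figures above work out as follows; this is simple arithmetic for illustration, not an analysis from the trial.

```python
# Restating the reported per-10,000 figures as NNT/NNH (arithmetic only).
events_prevented = {"90 mg": 40, "60 mg": 42}  # cardiac events avoided per 10,000
bleeds_caused = {"90 mg": 41, "60 mg": 31}     # major bleeds caused per 10,000

for dose in ("90 mg", "60 mg"):
    nnt = 10_000 / events_prevented[dose]
    nnh = 10_000 / bleeds_caused[dose]
    print(f"{dose}: NNT ~{nnt:.0f} per event prevented; "
          f"NNH ~{nnh:.0f} per major bleed")
# 90 mg: NNT ~250, NNH ~244; 60 mg: NNT ~238, NNH ~323
```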
In addition, patients on either dose of ticagrelor had a significantly higher rate of dyspnea, which led to more drug discontinuation.
Bottom line: Use of ticagrelor with aspirin for secondary prevention greater than one year after myocardial infarction reduced rates of cardiovascular death, MI, and stroke but increased the risk of major bleeding.
Citation: Bonaca MP, Bhatt DL, Cohen M, et al. Long-term use of ticagrelor in patients with prior myocardial infarction. N Engl J Med. 2015;372:1791-1800.