Remdesivir in Hospitalized Adults With Severe COVID-19: Lessons Learned From the First Randomized Trial
Study Overview
Objective. To assess the efficacy, safety, and clinical benefit of remdesivir in hospitalized adults with confirmed pneumonia due to severe SARS-CoV-2 infection.
Design. Randomized, investigator-initiated, placebo-controlled, double-blind, multicenter trial.
Setting and participants. The trial took place between February 6, 2020, and March 12, 2020, at 10 hospitals in Wuhan, China. Study participants were adults (aged ≥ 18 years) admitted to hospital who tested positive for SARS-CoV-2 by reverse transcription polymerase chain reaction assay and who had the following clinical characteristics: radiographic evidence of pneumonia; hypoxia with oxygen saturation ≤ 94% on room air or a ratio of arterial oxygen partial pressure to fractional inspired oxygen ≤ 300 mm Hg; and symptom onset to enrollment ≤ 12 days. Exclusion criteria included pregnancy or breastfeeding, liver cirrhosis, liver enzymes ≥ 5 times the upper limit of normal, severe renal impairment or receipt of renal replacement therapy, planned transfer to a non-study hospital, and enrollment in another COVID-19 trial within the previous month.
Intervention. Participants were randomized in a 2:1 ratio to the remdesivir group or the placebo group and were administered either intravenous infusions of remdesivir (200 mg on day 1 followed by 100 mg daily on days 2-10) or the same volume of placebo for 10 days. Clinical and safety data assessed included laboratory testing, electrocardiogram, and medication adverse effects. Testing of oropharyngeal and nasopharyngeal swab samples, anal swab samples, sputum, and stool was performed for viral RNA detection and quantification on days 1, 3, 5, 7, 10, 14, 21, and 28.
Main outcome measures. The primary endpoint of this study was time to clinical improvement within 28 days after randomization. Clinical improvement was defined as a 2-point reduction from participants’ admission status on a 6-point ordinal scale (1 = discharged or clinical recovery, 6 = death) or live discharge from hospital, whichever came first. Secondary outcomes included all-cause mortality at day 28 and duration of hospital admission, oxygen support, and invasive mechanical ventilation. Virological measures and safety outcomes were also ascertained; the latter included treatment-emergent adverse events, serious adverse events, and premature discontinuation of remdesivir.
The sample size estimate for the original study design was a total of 453 patients (302 in the remdesivir group and 151 in the placebo group). This sample size would provide 80% power to detect a hazard ratio (HR) of 1.4 for remdesivir versus placebo, corresponding to a 6-day reduction in time to clinical improvement. The analysis of the primary outcome was performed on an intention-to-treat basis, and time to clinical improvement within 28 days was assessed with Kaplan-Meier plots.
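The logic of such a design can be reconstructed with Schoenfeld's approximation for time-to-event comparisons. The sketch below is only an illustration under stated assumptions (2:1 allocation, two-sided α of 0.05, target HR of 1.4, and a hypothetical probability of clinical improvement used to convert required events into enrolled patients); it is not the authors' actual protocol calculation.

```python
# Illustrative sample size/power sketch using Schoenfeld's approximation for a
# log-rank/Cox comparison. Assumptions: 2:1 allocation, two-sided alpha = 0.05,
# target HR = 1.4 (as described above); the 70% improvement probability used to
# convert events into patients is hypothetical.
from math import log, sqrt
from statistics import NormalDist

def required_events(hr: float, alpha: float = 0.05, power: float = 0.80,
                    alloc_treated: float = 2 / 3) -> float:
    """Approximate number of events needed to detect `hr` with the given power."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    z_beta = nd.inv_cdf(power)
    p1, p2 = alloc_treated, 1 - alloc_treated
    return (z_alpha + z_beta) ** 2 / (p1 * p2 * log(hr) ** 2)

def approx_power(events: float, hr: float, alpha: float = 0.05,
                 alloc_treated: float = 2 / 3) -> float:
    """Approximate power of the log-rank test given a number of observed events."""
    nd = NormalDist()
    p1, p2 = alloc_treated, 1 - alloc_treated
    return nd.cdf(sqrt(events * p1 * p2) * abs(log(hr)) - nd.inv_cdf(1 - alpha / 2))

events = required_events(hr=1.4)                      # roughly 310-315 events
print(f"events needed for 80% power: {events:.0f}")

# Converting events to enrolled patients requires an assumed probability of
# clinical improvement within 28 days; 0.70 here is purely illustrative.
print(f"patients needed if ~70% improve: {events / 0.70:.0f}")
print(f"power check at that event count: {approx_power(events, hr=1.4):.2f}")
```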
Main results. A total of 255 patients were screened, of whom 237 were enrolled and randomized to the remdesivir (n = 158) or placebo (n = 79) group. In the remdesivir group, 155 participants started study treatment and 150 completed treatment per protocol; in the placebo group, 78 started study treatment and 76 completed treatment per protocol. Study enrollment was terminated after March 12, 2020, before the prespecified sample size was reached, because no additional patients met eligibility criteria once public health measures were implemented in Wuhan. The median age of participants was 65 years (IQR, 56-71), the majority were men (56% in the remdesivir group vs 65% in the placebo group), and the most common comorbidities were hypertension, diabetes, and coronary artery disease. Median time from symptom onset to study enrollment was 10 days (IQR, 9-12). Time to clinical improvement did not differ significantly between treatments (21 days for remdesivir vs 23 days for placebo; HR, 1.23; 95% confidence interval [CI], 0.87-1.75). Among participants who received treatment within 10 days of symptom onset, those administered remdesivir had a faster but nonsignificant time to clinical improvement than those administered placebo (18 days vs 23 days; HR, 1.52; 95% CI, 0.95-2.43). Treatment with remdesivir versus placebo also did not lead to differences in secondary outcomes (eg, 28-day mortality and duration of hospital stay, oxygen support, and invasive mechanical ventilation), changes in viral load over time, or adverse events between the groups.
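Because clinical improvement is the event of interest, a hazard ratio greater than 1 favors remdesivir (improvement occurring sooner). For readers who want to see the shape of such an analysis, the following is a minimal sketch on simulated data using the lifelines package; the event times, censoring rule, and effect size are made up and do not reproduce the trial dataset or the authors' analysis code.

```python
# Simulated illustration of a time-to-clinical-improvement analysis (Kaplan-Meier
# medians plus a Cox model hazard ratio). All numbers below are made up; only the
# group sizes (158 vs 79) and the 28-day follow-up mirror the trial description.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(0)
n_rem, n_plc = 158, 79

# Hypothetical improvement times (days); medians of ~21 and ~25 days imply a true
# hazard ratio of about 1.2 in this toy example.
t_rem = rng.exponential(scale=21 / np.log(2), size=n_rem)
t_plc = rng.exponential(scale=25 / np.log(2), size=n_plc)

df = pd.DataFrame({
    "time": np.concatenate([t_rem, t_plc]),
    "remdesivir": np.r_[np.ones(n_rem), np.zeros(n_plc)],
})
# Improvements after day 28 are censored at day 28, as with a 28-day endpoint.
df["improved"] = (df["time"] <= 28).astype(int)
df["time"] = df["time"].clip(upper=28)

# Kaplan-Meier estimate of median time to improvement in each arm.
for arm, grp in df.groupby("remdesivir"):
    kmf = KaplanMeierFitter().fit(grp["time"], event_observed=grp["improved"])
    print("remdesivir" if arm else "placebo   ", round(kmf.median_survival_time_, 1))

# Cox proportional hazards model: exp(coef) for `remdesivir` is the hazard ratio.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="improved")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```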
Conclusion. This study found that, compared with placebo, intravenous remdesivir did not significantly improve the time to clinical improvement, mortality, or time to clearance of SARS-CoV-2 in hospitalized adults with severe COVID-19. A numeric reduction in time to clinical improvement with early remdesivir treatment (ie, within 10 days of symptom onset) that approached statistical significance was observed in this underpowered study.
Commentary
Within a few short months of its emergence, SARS-CoV-2 infection has caused a global pandemic, posing a dire threat to public health due to its adverse effects on morbidity (eg, respiratory failure, thromboembolic disease, multiorgan failure) and mortality. To date, no pharmacologic treatment has been shown to effectively improve clinical outcomes in patients with COVID-19. Multiple clinical trials are ongoing globally to identify potential treatments for severe COVID-19. The first clinical trials of hydroxychloroquine and lopinavir-ritonavir, agents traditionally used for other indications, such as malaria and HIV, did not show a clear benefit in COVID-19.1,2 Remdesivir, a nucleoside analogue prodrug, is a broad-spectrum antiviral agent that was previously used for treatment of Ebola and has been shown to have inhibitory effects on pathogenic coronaviruses. The study reported by Wang and colleagues was the first randomized controlled trial (RCT) to evaluate whether remdesivir improves outcomes in patients with severe COVID-19. The worsening COVID-19 pandemic, coupled with the absence of a curative treatment, underscores the urgency of this trial.
The study was grounded in observational data from several recent case reports and case series on the potential efficacy of remdesivir in treating COVID-19.3 The study itself was well designed (ie, randomized, placebo-controlled, double-blind, multicenter) and carefully implemented (ie, high protocol adherence to treatments, no loss to follow-up). The principal limitation of this study was its failure to reach its target statistical power. Because successful epidemic control in Wuhan markedly reduced hospital admissions of patients with COVID-19, and because stringent termination criteria in the study protocol were applied, only 237 participants were enrolled rather than the 453 specified by the sample size estimate. This corresponded to a reduction in statistical power from 80% to 58%. As a result, the study was underpowered, rendering its findings inconclusive.
Despite this limitation, the study found that participants treated with remdesivir within 10 days of symptom onset had a numerically faster, although not statistically significant, time to clinical improvement. This raises the question of whether remdesivir administered early in the course of COVID-19 could improve clinical outcomes, which warrants further investigation in an adequately powered trial. In addition, data from this study provide evidence that intravenous remdesivir administration is likely safe in adults during the treatment period, although the long-term effects of the drug, as well as its safety profile in pediatric patients, remain unknown at this time.
While the study reported by Wang and colleagues was underpowered and is thus inconclusive, several other ongoing RCTs are evaluating the potential clinical benefit of remdesivir treatment in patients hospitalized with COVID-19. On the date of online publication of this report in The Lancet, the National Institutes of Health (NIH) published a news release summarizing preliminary findings from the Adaptive COVID-19 Treatment Trial (ACTT), which showed positive effects of remdesivir on clinical recovery from advanced COVID-19.4 The ACTT, the first RCT launched in the United States to evaluate an experimental treatment for COVID-19, included 1063 hospitalized participants with advanced COVID-19 and lung involvement. Participants who were administered remdesivir had a 31% faster time to recovery than those in the placebo group (median time to recovery, 11 days vs 15 days, respectively; P < 0.001) and showed improved survival that approached statistical significance (mortality rate, 8.0% vs 11.6%, respectively; P = 0.059). In response to these findings, the US Food and Drug Administration (FDA) issued an emergency use authorization for remdesivir on May 1, 2020, for the treatment of suspected or laboratory-confirmed COVID-19 in adults and children hospitalized with severe disease.5 While the findings in the NIH news release are encouraging and provide the first evidence of a potentially beneficial antiviral treatment for severe COVID-19 in humans, the scientific community awaits the peer-reviewed publication of the ACTT to better assess the safety and effectiveness of remdesivir therapy and to determine the trial’s implications for the management of COVID-19.
Applications for Clinical Practice
The discovery of an effective pharmacologic intervention for COVID-19 is of utmost urgency. While the present study was unable to answer the question of whether remdesivir is effective in improving clinical outcomes in patients with severe COVID-19, other ongoing or completed (ie, ACTT) studies will likely address this knowledge gap in the coming months. The FDA’s emergency use authorization for remdesivir provides a glimpse into this possibility.
–Katerina Oikonomou, MD, Brookdale Department of Geriatrics & Palliative Medicine, Icahn School of Medicine at Mount Sinai, New York, NY
–Fred Ko, MD
1. Tang W, Cao Z, Han M, et al. Hydroxychloroquine in patients with COVID-19: an open-label, randomized, controlled trial [published online April 14, 2020]. medRxiv.org. doi:10.1101/2020.04.10.20060558.
2. Cao B, Wang Y, Wen D, et al. A trial of lopinavir–ritonavir in adults hospitalized with severe COVID-19. N Engl J Med. 2020;382:1787-1799.
3. Grein J, Ohmagari N, Shin D, et al. Compassionate use of remdesivir for patients with severe COVID-19 [published online April 10, 2020]. N Engl J Med. doi:10.1056/NEJMoa2007016.
4. NIH clinical trial shows remdesivir accelerates recovery from advanced COVID-19. www.niaid.nih.gov/news-events/nih-clinical-trial-shows-remdesivir-accelerates-recovery-advanced-covid-19. Accessed May 9, 2020.
5. Coronavirus (COVID-19) update: FDA issues Emergency Use Authorization for potential COVID-19 treatment. www.fda.gov/news-events/press-announcements/coronavirus-covid-19-update-fda-issues-emergency-use-authorization-potential-covid-19-treatment. Accessed May 9, 2020.
Geriatric Assessment and Collaborative Medication Review for Older Adults With Polypharmacy
Study Overview
Objective. To examine the effect of clinical geriatric assessments and collaborative medication review by geriatricians and family physicians on quality of life and other patient outcomes in home-dwelling older adults with polypharmacy.
Design. The study was a single-blind, cluster randomized clinical trial enrolling home-dwelling adults aged 70 years and older who were taking 7 or more medications. Family physicians in Norway were recruited to participate in the trial with their patients. Randomization was at the family physician level to avoid contamination between intervention and control groups.
Setting and participants. The study was conducted in Akershus and Oslo, Norway. A total of 84 family physicians were recruited to participate in the trial with their patients, of whom 70 were included and randomized to intervention versus control; 14 were excluded because they had no eligible patients. Cluster size was limited to 5 patients per family physician to avoid large variation in cluster sizes. Patients were eligible for enrollment if they were home-dwelling, were aged 70 years or older, were taking 7 or more systemic medications regularly, and had medications administered by the home nursing service. Patients were excluded if they were expected to die or be institutionalized within 6 months or if their family physician discouraged participation. A total of 174 patients were recruited, with 87 patients in each group (34 family physicians were in the control group and 36 in the intervention group).
Intervention. The intervention included a geriatric assessment performed by a physician trained in geriatric medicine and supervised by a senior consultant. The geriatric assessment consisted of review of medical history; systematic screening for current problems; clinical examination; supplementary tests, if indicated; and review of each medication being used. The review of medication included the indication for each medication, dosage, adverse effects, and interactions. The geriatric assessment consultation took 1 hour to complete, on average. After the geriatric assessment, the family physician and the geriatrician met to discuss each medication and to establish a collaborative plan for adjustments and follow-up; this meeting was approximately 15 minutes in duration. Lastly, clinical follow-up with the older adult was conducted by the geriatrician or the family physician, as agreed upon in the plan, with most follow-up conducted by the family physician. Participants randomized to the control group received usual care without any intervention.
Main outcome measures. Outcomes were assessed at 16-week and 24-week follow-up. The main study outcome measure was health-related quality of life (HRQoL), as measured by the 15D instrument, at 16 weeks. The quality-of-life measure included the following dimensions, each rated on an ordinal scale of 5 levels: mobility, vision, hearing, breathing, sleeping, eating, speech, elimination, usual activities, mental function, discomfort or symptoms, depression, distress, vitality, and sexual activity. The index scale combining all dimensions ranges from 0 to 1, with a higher score indicating better quality of life. A predetermined change of 0.015 or more is considered clinically important, and a positive change of 0.035 indicates much better HRQoL. Other outcomes included appropriateness of medications, measured by the Medication Appropriateness Index and the Assessment of Underutilization; physical function (Short Physical Performance Battery); gait speed; grip strength; cognitive functioning; physical and cognitive disability (Functional Independence Measure); caregiver burden (Relative Stress Scale); physical measures, including orthostatic blood pressure, falls, and weight; hospital admissions; use of home nursing service; incidence of institutionalization; and mortality.
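To make the 0-to-1 index and the 0.015 threshold concrete, the sketch below shows how a 15D-style index can be assembled from per-dimension ordinal levels and compared against the clinically important difference. The level values and equal weights are hypothetical placeholders; the actual 15D instrument uses population-derived valuation weights.

```python
# Illustrative 15D-style index calculation. The level values and equal weights are
# hypothetical placeholders; the real 15D uses population-based valuation weights.
DIMENSIONS = [
    "mobility", "vision", "hearing", "breathing", "sleeping", "eating", "speech",
    "elimination", "usual_activities", "mental_function", "discomfort", "depression",
    "distress", "vitality", "sexual_activity",
]
# Map each ordinal level (1 = best, 5 = worst) to a 0-1 value (hypothetical).
LEVEL_VALUE = {1: 1.00, 2: 0.80, 3: 0.60, 4: 0.35, 5: 0.10}
WEIGHT = {d: 1 / len(DIMENSIONS) for d in DIMENSIONS}  # equal weights for illustration

def hrqol_index(levels: dict[str, int]) -> float:
    """Weighted 0-1 index from per-dimension ordinal levels (higher = better)."""
    return sum(WEIGHT[d] * LEVEL_VALUE[levels[d]] for d in DIMENSIONS)

baseline = {d: 3 for d in DIMENSIONS}                    # level 3 on every dimension
followup = dict(baseline, mobility=2, breathing=2)       # two dimensions improve

change = hrqol_index(followup) - hrqol_index(baseline)
print(f"index change: {change:.3f}")
print("clinically important (>= 0.015):", change >= 0.015)
print("'much better' HRQoL (>= 0.035):", change >= 0.035)
```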
Main results. The study included 174 patients with an average age of 83.3 years (SD, 7.3); 67.8% were women. Of those randomized to the intervention and control groups, 158 (90.8%) completed the trial. The average number of regularly used medications was 10.1 (SD, 2.7) in the intervention group and 9.5 (SD, 2.6) in the control group. At week 16 of follow-up, patients in the intervention group had an improved HRQoL score measured by the 15D instrument; the difference between the intervention and control groups was 0.045 (95% confidence interval [CI], 0.004-0.086; P = 0.03). Medication appropriateness was better in the intervention group than in the control group at both 16 weeks and 24 weeks. Nearly all (99%) patients in the intervention group experienced medication changes, including withdrawal of medications, dosage adjustment, or new drug regimens. There was a trend toward a higher rate of hospitalization during follow-up in the intervention group (adjusted risk ratio, 2.03; 95% CI, 0.98-4.24; P = 0.06). Other secondary outcomes did not differ substantially between the intervention and control groups.
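The hospitalization finding illustrates how a risk ratio whose 95% CI crosses 1 (here, 2.03; 0.98-4.24) is read as a trend rather than a significant difference. The trial's estimate was adjusted within a regression model; as a rough analogue only, the snippet below computes an unadjusted risk ratio and Wald-type CI from hypothetical counts.

```python
# Illustrative unadjusted risk ratio with a Wald 95% CI from hypothetical counts;
# the trial's reported estimate (2.03; 95% CI, 0.98-4.24) came from an adjusted model.
from math import exp, log, sqrt

def risk_ratio_ci(events_a: int, n_a: int, events_b: int, n_b: int, z: float = 1.96):
    """Risk ratio (group A vs group B) with a Wald 95% confidence interval."""
    risk_a, risk_b = events_a / n_a, events_b / n_b
    rr = risk_a / risk_b
    se_log_rr = sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo, hi = exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr)
    return rr, lo, hi

# Hypothetical counts: 16/87 hospitalized in the intervention arm vs 8/87 in control.
rr, lo, hi = risk_ratio_ci(16, 87, 8, 87)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
# The interval includes 1.0, so the difference is not statistically significant.
```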
Conclusion. The study demonstrated that a clinical geriatric assessment and collaborative medication review by geriatrician and family physician led to improved HRQoL and improved medication use.
Commentary
The use of multiple medications in older adults is common, with almost 20% of older adults over age 65 taking 10 or more medications.1 Polypharmacy in older adults is associated with lower adherence rates and increases the potential for interactions between medications.2 Age-related changes, such as changes in absorption, metabolism, and excretion, affect pharmacokinetics of medications and potentiate adverse drug reactions, requiring adjustments in use and dosing to optimize safety and outcomes. Recognizing the potential effects of medications in older adults, evidence-based guidelines, such as the Beers criteria3 and START/STOPP criteria,4 have been developed to identify potentially inappropriate medications in older adults and to improve prescribing. Randomized trials using the START/STOPP criteria have demonstrated improved medication appropriateness, reduced polypharmacy, and reduced adverse drug reactions.5 Although this study did not use a criteria-based approach for improving medication use, it demonstrated that in a population of older adults with polypharmacy, medication review with geriatricians can lead to improved HRQoL while improving medication appropriateness. The collaborative approach between the family physician and geriatrician, rather than a consultative approach with recommendations from a geriatrician, may have contributed to increased uptake of medication changes. Such an approach may be a reasonable strategy to improve medication use in older adults.
A limitation of the study is that the improvement in HRQoL could have been the result of medication changes, but it could also have been due to other changes in the plan of care that resulted from the geriatric assessment. As noted by the authors, the increase in hospital admissions, though not statistically significant, could have resulted from the medication modifications; however, the geriatric assessments could also have identified severe illnesses requiring hospitalization, as the timing of hospitalizations relative to the assessments suggests. Thus, an increase in hospitalization resulting from timely identification of severe illness is more likely a benefit than an adverse effect, although further studies are needed to clarify this.
Applications for Clinical Practice
Older adults with multiple chronic conditions and complex medication regimens are at risk for poor health outcomes. A purposeful medication review that removes unnecessary and potentially harmful medications, adjusts dosages, and initiates appropriate medications may therefore yield health benefits, such as improved HRQoL. The present study used an approach that could be scalable, which is important given the limited number of clinicians with geriatrics expertise. For health systems with geriatrics clinical expertise, it may be reasonable to adopt a similar collaborative approach to improve care for the older adults most at risk. Further reports on how patients and family physicians perceive this intervention will enhance our understanding of whether it could be implemented widely.
–William W. Hung, MD, MPH
1. Steinman MA, Hanlon JT. Managing medications in clinically complex elders: “There’s got to be a happy medium”. JAMA. 2010;304:1592-1601.
2. Vik SA, Maxwell CJ, Hogan DB. Measurement, correlates, and health outcomes of medication adherence among seniors. Ann Pharmacother. 2004;38:303-312.
3. American Geriatrics Society 2015 updated Beers criteria for potentially inappropriate medication use in older adults. J Am Geriatr Soc. 2015;63:2227-2246.
4. Hill-Taylor B, Sketris I, Hayden J, et al. Application of the STOPP/START criteria: a systematic review of the prevalence of potentially inappropriate prescribing in older adults, and evidence of clinical, humanistic and economic impact. J Clin Pharm Ther. 2013;38:360-372.
5. O’Mahony D. STOPP/START criteria for potentially inappropriate medications/ potential prescribing omissions in older people: origin and progress. Expert Rev Clin Pharmacol. 2020;13:15-22.
Study Overview
Objective. To examine the effect of clinical geriatric assessments and collaborative medication review by geriatricians and family physicians on quality of life and other patient outcomes in home-dwelling older adults with polypharmacy.
Design. The study was a single-blind, cluster randomized clinical trial enrolling home-dwelling adults aged 70 years and older who were taking 7 or more medications. Family physicians in Norway were recruited to participate in the trial with their patients. Randomization was at the family physician level to avoid contamination between intervention and control groups.
Setting and participants. The study was conducted in Akershus and Oslo, Norway. Family physicians were recruited to participate in the trial with their patients. A total of 84 family physicians were recruited, of which 70 were included in the trial and randomized to intervention versus control; 14 were excluded because they had no eligible patients. The cluster size of each family physician was limited to 5 patients per physician to avoid large variation in cluster sizes. Patients were eligible for enrollment if they were home-dwelling, aged 70 years or older, and were taking 7 or more systemic medications regularly and had medications administered by the home nursing service. Patients were excluded if they were expected to die or be institutionalized within 6 months, or if they were discouraged from participation by their family physician. A total of 174 patients were recruited, with 87 patients in each group (34 family physicians were in the control group and 36 in the intervention group).
Intervention. The intervention included a geriatric assessment performed by a physician trained in geriatric medicine and supervised by a senior consultant. The geriatric assessment consisted of review of medical history; systematic screening for current problems; clinical examination; supplementary tests, if indicated; and review of each medication being used. The review of medication included the indication for each medication, dosage, adverse effects, and interactions. The geriatric assessment consultation took 1 hour to complete, on average. After the geriatric assessment, the family physician and the geriatrician met to discuss each medication and to establish a collaborative plan for adjustments and follow-up; this meeting was approximately 15 minutes in duration. Lastly, clinical follow-up with the older adult was conducted by the geriatrician or the family physician, as agreed upon in the plan, with most follow-up conducted by the family physician. Participants randomized to the control group received usual care without any intervention.
Main outcome measures. Outcomes were assessed at 16-week and 24-week follow-up. The main study outcome measure was health-related quality of life (HRQoL), as measured by the 15D instrument, at 16 weeks. The quality-of-life measure included the following aspects, each rated on an ordinal scale of 5 levels: mobility, vision, hearing, breathing, sleeping, eating, speech, elimination, usual activities, mental function, discomfort or symptoms, depression, distress, vitality, and sexual activity. The index scale including all aspects is in the range of 0 to 1, with a higher score indicating better quality of life. A predetermined change of 0.015 or more is considered clinically important, and a positive change of 0.035 indicates much better HRQoL. Other outcomes included: appropriateness of medications measured by the Medication Appropriateness Index and the Assessment of Underutilization; physical function (short Physical Performance battery); gait speed; grip strength; cognitive functioning; physical and cognitive disability (Functional Independence Measure); caregiver burden (Relative Stress Scale); physical measures, including orthostatic blood pressure, falls, and weight; hospital admissions; use of home nursing service; incidence of institutionalization; and mortality.
Main results. The study included 174 patients with an average age of 83.3 years (SD, 7.3); 67.8% were women. Of those who were randomized to the intervention and control groups, 158 (90.8%) completed the trial. The average number of regularly used medications was 10.1 (SD, 2.7) in the intervention group and 9.5 (SD, 2.6) in the control group. At week 16 of follow-up, patients in the intervention group had an improved HRQoL score measured by the 15D instrument; the difference between the intervention group and control groups was 0.045 (95% confidence interval [CI], 0.004 -0.086; P = 0.03). Medication appropriateness was better in the intervention group, as compared with the control group at both 16 weeks and 24 weeks. Nearly all (99%) patients in the intervention group experienced medication changes, which included withdrawal of medications, dosage adjustment, or new drug regimens. There was a trend towards a higher rate of hospitalization during follow-up in the intervention group (adjusted risk ratio, 2.03; 95% CI, 0.98-4.24; P = 0.06). Other secondary outcomes were not substantially different between the intervention and control groups.
Conclusion. The study demonstrated that a clinical geriatric assessment and collaborative medication review by geriatrician and family physician led to improved HRQoL and improved medication use.
Commentary
The use of multiple medications in older adults is common, with almost 20% of older adults over age 65 taking 10 or more medications.1 Polypharmacy in older adults is associated with lower adherence rates and increases the potential for interactions between medications.2 Age-related changes, such as changes in absorption, metabolism, and excretion, affect pharmacokinetics of medications and potentiate adverse drug reactions, requiring adjustments in use and dosing to optimize safety and outcomes. Recognizing the potential effects of medications in older adults, evidence-based guidelines, such as the Beers criteria3 and START/STOPP criteria,4 have been developed to identify potentially inappropriate medications in older adults and to improve prescribing. Randomized trials using the START/STOPP criteria have demonstrated improved medication appropriateness, reduced polypharmacy, and reduced adverse drug reactions.5 Although this study did not use a criteria-based approach for improving medication use, it demonstrated that in a population of older adults with polypharmacy, medication review with geriatricians can lead to improved HRQoL while improving medication appropriateness. The collaborative approach between the family physician and geriatrician, rather than a consultative approach with recommendations from a geriatrician, may have contributed to increased uptake of medication changes. Such an approach may be a reasonable strategy to improve medication use in older adults.
A limitation of the study is that the improvement in HRQoL could have been the result of medication changes, but could also have been due to other changes in the plan of care that resulted from the geriatric assessment. As noted by the authors, the increase in hospital admissions, though not statistically significant, could have resulted from the medication modifications; however, it was also noted that the geriatric assessments could have identified severe illnesses that required hospitalization, as the timeline from geriatric assessment to hospitalization suggested was the case. Thus, the increase in hospitalization resulting from timely identification of severe illness was more likely a benefit than an adverse effect; however, further studies should be done to elucidate this.
Applications for Clinical Practice
Older adults with multiple chronic conditions and complex medication regimens are at risk for poor health outcomes, and a purposeful medication review to improve medication use, leading to the removal of unnecessary and potentially harmful medications, adjustment of dosages, and initiation of appropriate medications, may yield health benefits, such as improved HRQoL. The present study utilized an approach that could be scalable, which is important given the limited number of clinicians with geriatrics expertise. For health systems with geriatrics clinical expertise, it may be reasonable to consider adopting a similar collaborative approach in order to improve care for older adults most at risk. Further reports on how patients and family physicians perceive this intervention will enhance our understanding of whether it could be implemented widely.
–William W. Hung, MD, MPH
Study Overview
Objective. To examine the effect of clinical geriatric assessments and collaborative medication review by geriatricians and family physicians on quality of life and other patient outcomes in home-dwelling older adults with polypharmacy.
Design. The study was a single-blind, cluster randomized clinical trial enrolling home-dwelling adults aged 70 years and older who were taking 7 or more medications. Family physicians in Norway were recruited to participate in the trial with their patients. Randomization was at the family physician level to avoid contamination between intervention and control groups.
Setting and participants. The study was conducted in Akershus and Oslo, Norway. Family physicians were recruited to participate in the trial with their patients. A total of 84 family physicians were recruited, of which 70 were included in the trial and randomized to intervention versus control; 14 were excluded because they had no eligible patients. The cluster size of each family physician was limited to 5 patients per physician to avoid large variation in cluster sizes. Patients were eligible for enrollment if they were home-dwelling, aged 70 years or older, and were taking 7 or more systemic medications regularly and had medications administered by the home nursing service. Patients were excluded if they were expected to die or be institutionalized within 6 months, or if they were discouraged from participation by their family physician. A total of 174 patients were recruited, with 87 patients in each group (34 family physicians were in the control group and 36 in the intervention group).
Intervention. The intervention included a geriatric assessment performed by a physician trained in geriatric medicine and supervised by a senior consultant. The geriatric assessment consisted of review of medical history; systematic screening for current problems; clinical examination; supplementary tests, if indicated; and review of each medication being used. The review of medication included the indication for each medication, dosage, adverse effects, and interactions. The geriatric assessment consultation took 1 hour to complete, on average. After the geriatric assessment, the family physician and the geriatrician met to discuss each medication and to establish a collaborative plan for adjustments and follow-up; this meeting was approximately 15 minutes in duration. Lastly, clinical follow-up with the older adult was conducted by the geriatrician or the family physician, as agreed upon in the plan, with most follow-up conducted by the family physician. Participants randomized to the control group received usual care without any intervention.
Main outcome measures. Outcomes were assessed at 16-week and 24-week follow-up. The main study outcome measure was health-related quality of life (HRQoL), as measured by the 15D instrument, at 16 weeks. The quality-of-life measure included the following aspects, each rated on an ordinal scale of 5 levels: mobility, vision, hearing, breathing, sleeping, eating, speech, elimination, usual activities, mental function, discomfort or symptoms, depression, distress, vitality, and sexual activity. The index scale including all aspects is in the range of 0 to 1, with a higher score indicating better quality of life. A predetermined change of 0.015 or more is considered clinically important, and a positive change of 0.035 indicates much better HRQoL. Other outcomes included: appropriateness of medications measured by the Medication Appropriateness Index and the Assessment of Underutilization; physical function (short Physical Performance battery); gait speed; grip strength; cognitive functioning; physical and cognitive disability (Functional Independence Measure); caregiver burden (Relative Stress Scale); physical measures, including orthostatic blood pressure, falls, and weight; hospital admissions; use of home nursing service; incidence of institutionalization; and mortality.
Main results. The study included 174 patients with an average age of 83.3 years (SD, 7.3); 67.8% were women. Of those who were randomized to the intervention and control groups, 158 (90.8%) completed the trial. The average number of regularly used medications was 10.1 (SD, 2.7) in the intervention group and 9.5 (SD, 2.6) in the control group. At week 16 of follow-up, patients in the intervention group had an improved HRQoL score measured by the 15D instrument; the difference between the intervention group and control groups was 0.045 (95% confidence interval [CI], 0.004 -0.086; P = 0.03). Medication appropriateness was better in the intervention group, as compared with the control group at both 16 weeks and 24 weeks. Nearly all (99%) patients in the intervention group experienced medication changes, which included withdrawal of medications, dosage adjustment, or new drug regimens. There was a trend towards a higher rate of hospitalization during follow-up in the intervention group (adjusted risk ratio, 2.03; 95% CI, 0.98-4.24; P = 0.06). Other secondary outcomes were not substantially different between the intervention and control groups.
Conclusion. The study demonstrated that a clinical geriatric assessment and collaborative medication review by geriatrician and family physician led to improved HRQoL and improved medication use.
Commentary
The use of multiple medications in older adults is common, with almost 20% of adults over age 65 taking 10 or more medications.1 Polypharmacy in older adults is associated with lower adherence rates and an increased potential for drug interactions.2 Age-related changes in absorption, metabolism, and excretion affect the pharmacokinetics of medications and potentiate adverse drug reactions, requiring adjustments in use and dosing to optimize safety and outcomes. Recognizing the potential effects of medications in older adults, evidence-based guidelines, such as the Beers criteria3 and STOPP/START criteria,4 have been developed to identify potentially inappropriate medications in older adults and to improve prescribing. Randomized trials using the STOPP/START criteria have demonstrated improved medication appropriateness, reduced polypharmacy, and fewer adverse drug reactions.5 Although this study did not use a criteria-based approach to improving medication use, it demonstrated that, in a population of older adults with polypharmacy, medication review with a geriatrician can lead to improved HRQoL while improving medication appropriateness. The collaborative approach between the family physician and geriatrician, rather than a consultative approach in which the geriatrician simply issues recommendations, may have increased uptake of the medication changes. Such an approach may be a reasonable strategy to improve medication use in older adults.
A limitation of the study is that the improvement in HRQoL could have resulted from the medication changes, but it could also have been due to other changes in the plan of care that followed from the geriatric assessment. As noted by the authors, the increase in hospital admissions, though not statistically significant, could have resulted from the medication modifications; alternatively, the geriatric assessments could have identified severe illnesses that required hospitalization, a possibility supported by the timeline from geriatric assessment to hospitalization. Thus, the increase in hospitalization may reflect timely identification of severe illness, a benefit rather than an adverse effect; however, further studies are needed to clarify this.
Applications for Clinical Practice
Older adults with multiple chronic conditions and complex medication regimens are at risk for poor health outcomes, and a purposeful medication review to improve medication use, leading to the removal of unnecessary and potentially harmful medications, adjustment of dosages, and initiation of appropriate medications, may yield health benefits, such as improved HRQoL. The present study utilized an approach that could be scalable, which is important given the limited number of clinicians with geriatrics expertise. For health systems with geriatrics clinical expertise, it may be reasonable to consider adopting a similar collaborative approach in order to improve care for older adults most at risk. Further reports on how patients and family physicians perceive this intervention will enhance our understanding of whether it could be implemented widely.
–William W. Hung, MD, MPH
1. Steinman MA, Hanlon JT. Managing medications in clinically complex elders: “There’s got to be a happy medium”. JAMA. 2010;304:1592-1601.
2. Vik SA, Maxwell CJ, Hogan DB. Measurement, correlates, and health outcomes of medication adherence among seniors. Ann Pharmacother. 2004;38:303-312.
3. American Geriatrics Society 2015 updated Beers criteria for potentially inappropriate medication use in older adults. J Am Geriatr Soc. 2015;63:2227-2246.
4. Hill-Taylor B, Sketris I, Hayden J, et al. Application of the STOPP/START criteria: a systematic review of the prevalence of potentially inappropriate prescribing in older adults, and evidence of clinical, humanistic and economic impact. J Clin Pharm Ther. 2013;38:360-372.
5. O’Mahony D. STOPP/START criteria for potentially inappropriate medications/potential prescribing omissions in older people: origin and progress. Expert Rev Clin Pharmacol. 2020;13:15-22.
Pembrolizumab Plus Neoadjuvant Chemotherapy Improves Pathologic Complete Response Rates in Triple-Negative Breast Cancer
Study Overview
Objective. To evaluate the efficacy and safety of pembrolizumab in combination with neoadjuvant chemotherapy followed by adjuvant pembrolizumab in early-stage triple-negative breast cancer.
Design. International, multicenter, randomized, double-blind, phase 3 trial.
Intervention. Patients were randomly assigned in a 2:1 fashion to receive either pembrolizumab or placebo. Patients received 4 cycles of neoadjuvant pembrolizumab or placebo once every 3 weeks, in addition to paclitaxel 80 mg/m2 weekly plus carboplatin AUC 5 once every 3 weeks. This was followed by 4 cycles of pembrolizumab or placebo plus doxorubicin 60 mg/m2 or epirubicin 90 mg/m2 plus cyclophosphamide 600 mg/m2 once every 3 weeks. Patients then underwent definitive surgery 3 to 6 weeks after completion of neoadjuvant therapy. In the adjuvant setting, patients received pembrolizumab or placebo once every 3 weeks for up to 9 cycles. Adjuvant capecitabine was not allowed.
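The regimen above specifies doses per body surface area (mg/m2) and carboplatin by AUC but, as summarized here, does not restate the underlying dosing formulas. As a hedged illustration, body surface area is commonly estimated with the Mosteller formula and carboplatin dosing by AUC conventionally uses the Calvert formula (dose in mg = target AUC x [GFR + 25]); the patient values in the sketch below are hypothetical.

```python
from math import sqrt

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Body surface area in m^2 via the Mosteller formula."""
    return sqrt(height_cm * weight_kg / 3600)

def calvert_carboplatin_dose(target_auc: float, gfr_ml_min: float) -> float:
    """Carboplatin dose in mg via the Calvert formula: AUC x (GFR + 25)."""
    return target_auc * (gfr_ml_min + 25)

# Hypothetical patient, for illustration only
bsa = mosteller_bsa(height_cm=165, weight_kg=70)      # about 1.79 m^2
paclitaxel_mg = 80 * bsa                              # paclitaxel 80 mg/m2 weekly
carboplatin_mg = calvert_carboplatin_dose(5, 90)      # carboplatin AUC 5 every 3 weeks

print(f"BSA {bsa:.2f} m^2; paclitaxel {paclitaxel_mg:.0f} mg; carboplatin {carboplatin_mg:.0f} mg")
```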
Setting and participants. A total of 1174 patients underwent randomization: 784 patients in the pembrolizumab/chemotherapy group and 390 patients in the placebo/chemotherapy group. Eligible patients had newly diagnosed, centrally confirmed triple-negative breast cancer (nonmetastatic: T1c, N1-2 or T2-4, N0-2). Patients were eligible regardless of PD-L1 status, and those with inflammatory breast cancer or multifocal primaries could be enrolled.
Main outcome measures. The primary endpoints of this study were pathologic complete response (pCR) rate (defined as ypT0/ypTis, ypN0) at the time of surgery and event-free survival (EFS) in the intention-to-treat population. Secondary endpoints included pCR in all patients, pCR among patients with PD-L1–positive tumors, EFS among patients with PD-L1–positive tumors, and overall survival among all patients and those with PD-L1–positive tumors. PD-L1 expression was assessed using the PD-L1 IHC 22C3 pharmDx assay (Agilent, Santa Clara, CA). Expression was characterized according to the combined positive score (CPS), with a score of 1 or greater considered positive.
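The combined positive score for the 22C3 assay is conventionally calculated as the number of PD-L1–staining cells (tumor cells, lymphocytes, and macrophages) divided by the number of viable tumor cells, multiplied by 100; the sketch below illustrates that calculation with hypothetical cell counts.

```python
def combined_positive_score(stained_tumor_cells: int, stained_lymphocytes: int,
                            stained_macrophages: int, viable_tumor_cells: int) -> float:
    """CPS = PD-L1-staining cells (tumor cells, lymphocytes, macrophages)
    divided by viable tumor cells, multiplied by 100."""
    stained = stained_tumor_cells + stained_lymphocytes + stained_macrophages
    return 100 * stained / viable_tumor_cells

# Hypothetical counts, for illustration only
cps = combined_positive_score(stained_tumor_cells=2, stained_lymphocytes=3,
                              stained_macrophages=1, viable_tumor_cells=250)
print(cps, cps >= 1)  # 2.4 True -> PD-L1 positive by the trial's cutoff
```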
Results. The baseline characteristics were well balanced between the treatment arms. At the time of the second interim analysis, the median duration of follow-up was 15.5 months. Among the first 602 patients who were randomized, the pCR rate was 64.8% in the pembrolizumab/chemotherapy group and 51.2% in the placebo/chemotherapy group (95% confidence interval for the difference, 5.4-21.8 percentage points; P < 0.001). The pCR rate in the PD-L1–positive population was 68.9% in the pembrolizumab/chemotherapy group, as compared to 54.9% in the placebo group. In the PD-L1–negative population, the pCR rate was 45.3% in the pembrolizumab/chemotherapy group, as compared to 30.3% in the placebo group. At the time of analysis, 104 events had occurred, and the estimated percentage of patients alive without disease progression at 18 months was 91% in the pembrolizumab group and 85% in the placebo group. The median EFS was not reached in either group.
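The confidence interval above refers to the between-group difference in pCR rates, expressed in percentage points. The trial used its own prespecified statistical methods, but as a rough check, a simple Wald-type interval computed from the reported proportions, assuming an approximately 2:1 split of the first 602 randomized patients (about 401 vs 201; the exact group sizes are an assumption here), reproduces a similar range.

```python
from math import sqrt

# Reported pCR proportions among the first 602 randomized patients
p_pembro, p_placebo = 0.648, 0.512
# Assumed approximate group sizes from the 2:1 randomization (not stated above)
n_pembro, n_placebo = 401, 201

diff = p_pembro - p_placebo                      # about 0.136 (13.6 percentage points)
se = sqrt(p_pembro * (1 - p_pembro) / n_pembro +
          p_placebo * (1 - p_placebo) / n_placebo)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"difference {diff*100:.1f} points, 95% CI {lo*100:.1f} to {hi*100:.1f}")
# Roughly 5.3 to 22.0 percentage points, consistent with the reported 5.4-21.8
```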
Grade 3 or higher adverse events in the neoadjuvant phase were seen in 76.8% and 72.2% of patients in the pembrolizumab and placebo arms, respectively. Serious treatment-related adverse events occurred in 32% of patients in the pembrolizumab group compared to 19% in the placebo group; febrile neutropenia and anemia were the most common. Discontinuation of the trial drug because of adverse events occurred in 23% of patients in the pembrolizumab arm and in 12% in the placebo arm. The majority of treatment-related adverse events occurred in the neoadjuvant phase. In the adjuvant phase, treatment-related adverse events occurred in 48% and 43% of patients in the pembrolizumab and placebo groups, respectively.
Conclusion. The combination of neoadjuvant chemotherapy and pembrolizumab in patients with newly diagnosed, early-stage, triple-negative breast cancer yielded a higher percentage of patients achieving a pCR as compared with chemotherapy plus placebo.
Commentary
The current study adds to the growing body of literature outlining the efficacy of immune checkpoint inhibition in triple-negative breast cancer. The previously published IMpassion130 trial showed that the addition of the PD-L1 antibody atezolizumab to nab-paclitaxel improved progression-free survival in patients with PD-L1–positive (1% or greater) metastatic triple-negative breast cancer.1 Similarly, in the phase 2 I-SPY2 trial, the addition of pembrolizumab to standard neoadjuvant chemotherapy led to a near tripling of pCR rates in triple-negative breast cancer.2 While the current study demonstrated improved pCR rates with pembrolizumab, no difference in EFS has yet been demonstrated, and longer-term follow-up is required. Numerous studies have documented an association between pCR and improved disease-free survival and, possibly, overall survival. Cortazar and colleagues performed a pooled analysis of 12 international trials, which demonstrated an association between pCR and improved EFS (hazard ratio [HR], 0.24) and overall survival (HR, 0.16) in patients with triple-negative breast cancer.3 Longer-term follow-up of the current study will be needed to confirm such an association.
The current study demonstrated a benefit with the addition of pembrolizumab across treatment subgroups, including both the PD-L1–positive and PD-L1–negative populations. While this differs from the findings of the IMpassion130 trial, it is difficult to draw definitive conclusions because the 2 trials studied different antibodies and thus used different assays to define PD-L1 positivity. Notable differences exist in the determination of PD-L1 status across assays, and it is important for providers to use the assay appropriate to each antibody. These differences highlight the need for more informative biomarkers to predict benefit from immune checkpoint inhibition.
It is also noteworthy that the control arm in the current trial was a platinum-based regimen. Platinum-based neoadjuvant regimens have previously been shown to induce higher pCR rates in triple-negative breast cancer; however, the incorporation of carboplatin as standard of care remains a topic of debate.4 A similar trial evaluating the efficacy of atezolizumab combined with platinum-based neoadjuvant chemotherapy in triple-negative breast cancer, NSABP B-59 (NCT03281954), is underway, with the control arm also incorporating carboplatin; its results will help further define the role of checkpoint inhibitors in the neoadjuvant setting. Of note, the current trial did not allow adjuvant capecitabine, which was previously shown in the CREATE-X trial to prolong survival in this population.5 How the use of adjuvant capecitabine would affect these results is unknown.6 The incidence of grade 3 or higher toxicities in the current trial was similar in both groups, although infusion reactions and skin reactions appeared more common in the pembrolizumab group. Immune-related adverse events were consistent with prior pembrolizumab data.
Applications for Clinical Practice
KEYNOTE-522 adds to the growing evidence suggesting that incorporation of immune checkpoint inhibitors into neoadjuvant therapy in patients with triple-negative breast cancer can improve pCR rates; however, its use as a standard of care will require longer-term follow-up to ensure the noted findings translate into improvement in EFS and, ultimately, overall survival.
– Daniel Isaac, DO, MS
1. Schmid P, Adams S, Rugo HS, et al. Atezolizumab and nab-paclitaxel in advanced triple-negative breast cancer. N Engl J Med. 2018;379:2108-2121.
2. Nanda R, Liu MC, Yau C, et al. Pembrolizumab plus standard neoadjuvant therapy for high-risk breast cancer (BC): results from I-SPY 2. J Clin Oncol. 2017;35: Suppl:506. Abstract 506.
3. Cortazar P, Zhang L, Untch M, et al. Pathological complete response and long-term clinical benefit in breast cancer: the CTNeoBC pooled analysis. Lancet. 2014;384:164-172.
4. Sikov WM, Berry DA, Perou CM, et al. Impact of the addition of carboplatin and/or bevacizumab to neoadjuvant one-per-week paclitaxel followed by dose-dense doxorubicin and cyclophosphamide on pathologic complete response in stage II to III triple-negative breast cancer: CALGB 40603 (Alliance). J Clin Oncol. 2015;33:13-21.
5. Masuda N, Lee S-J, Ohtani S, et al. Adjuvant capecitabine for breast cancer after preoperative chemotherapy. N Engl J Med. 2017;376:2147-2159.
6. von Minckwitz G, Schneeweiss A, Loibl S, et al. Neoadjuvant carboplatin in patients with triple-negative and HER2-positive early breast cancer (GeparSixto; GBG 66): a randomised phase 2 trial. Lancet Oncol. 2014;15:747-756.
Cabazitaxel Improves Progression-Free and Overall Survival in Metastatic Prostate Cancer After Progression on Abiraterone or Enzalutamide
Study Overview
Objective. To evaluate the efficacy of cabazitaxel compared to androgen-signaling–targeted inhibitors (ASTIs) in patients with metastatic castration-resistant prostate cancer who have received docetaxel and have progressed within 12 months of treatment with either abiraterone or enzalutamide.
Design. The CARD trial was an international, randomized, open-label phase 3 trial conducted across 13 European countries.
Setting and participants. Eligible patients were 18 years of age or older; had metastatic castration-resistant prostate cancer previously treated with docetaxel; and had disease progression within 12 months of starting treatment with abiraterone or enzalutamide. All patients had histologically proven prostate cancer, castrate levels of serum testosterone, and disease progression, defined by at least 2 new bone lesions or a rising prostate-specific antigen (PSA) level. A total of 255 patients underwent randomization between November 2015 and November 2018; 129 were assigned to receive cabazitaxel and 126 to receive an ASTI, of whom 58 received abiraterone and 66 received enzalutamide. Patients who had received an ASTI in the setting of castration-sensitive metastatic prostate cancer were included.
Intervention. Patients were randomized in a 1:1 fashion to receive either cabazitaxel or an ASTI (abiraterone or enzalutamide). Patients receiving cabazitaxel 25 mg/m2 intravenously every 3 weeks also received daily oral prednisone and primary prophylactic granulocyte colony-stimulating factor. Patients assigned to receive an ASTI received abiraterone 1000 mg orally daily with prednisone 5 mg twice daily or enzalutamide 160 mg daily; those who had progressed on abiraterone were assigned to enzalutamide, and those who had progressed on enzalutamide were assigned to abiraterone. Patients were treated until imaging-based disease progression, unacceptable toxicity, or initiation of an alternative therapy.
Main outcome measures. The primary endpoint was imaging-based progression-free survival, which was defined as the time from randomization until objective tumor progression, progression of bone lesions, or death. The secondary endpoints were overall survival, progression-free survival, PSA response, tumor and pain responses, a new symptomatic skeletal event, and safety.
Results. The median follow-up was 9.2 months. Imaging-based disease progression or death from any cause occurred in 95 (73.6%) participants in the cabazitaxel group, as compared to 101 (80.2%) of those assigned to receive an ASTI. The median imaging-based progression-free survival was 8.0 months in the cabazitaxel group and 3.7 months in the abiraterone/enzalutamide group. The median duration of treatment was longer with cabazitaxel (22 vs 12.5 weeks). The most common reasons for treatment discontinuation were disease progression (43.7% of patients receiving cabazitaxel vs 71% receiving an ASTI) and adverse events (19.8% vs 8.9%, respectively).
The trial’s secondary endpoints also favored the cabazitaxel group over the abiraterone/enzalutamide group. There were 70 deaths (54.2%) in the cabazitaxel group and 83 (65.9%) in the ASTI group. Both median overall survival (13.6 months vs 11 months) and median progression-free survival (4.4 months vs 2.7 months) were longer with cabazitaxel. A 50% or greater reduction in the PSA level from baseline occurred in 35.7% of the cabazitaxel group and 13.5% of the ASTI group.
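The PSA response reported above uses a 50% or greater decline from baseline as the cutoff. A minimal sketch of that classification, with hypothetical PSA values, is shown below; the trial's full response criteria may include additional requirements (eg, confirmation on repeat testing) not captured here.

```python
def psa_response(baseline_psa: float, current_psa: float, threshold: float = 0.50) -> bool:
    """Return True if PSA has declined from baseline by at least `threshold` (default 50%)."""
    if baseline_psa <= 0:
        raise ValueError("baseline PSA must be positive")
    decline = (baseline_psa - current_psa) / baseline_psa
    return decline >= threshold

# Hypothetical values, for illustration only
print(psa_response(baseline_psa=80.0, current_psa=35.0))  # True  (56% decline)
print(psa_response(baseline_psa=80.0, current_psa=50.0))  # False (38% decline)
```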
Regarding safety, the incidence of serious adverse events was similar in the 2 groups (38.9% in the cabazitaxel group and 38.7% in the ASTI group). Treatment discontinuation because of adverse events occurred more frequently in the cabazitaxel group (19.8%) than in the ASTI group (8.9%). Grade 3 or higher adverse events that occurred more frequently with cabazitaxel included asthenia (4% vs 2.4%), diarrhea (3.2% vs 0%), peripheral neuropathy (3.2% vs 0%), and febrile neutropenia (3.2% vs 0%).
Conclusion. Patients with metastatic castration-resistant prostate cancer previously treated with docetaxel who had disease progression within 12 months on an ASTI had longer imaging-based progression-free survival and overall survival when treated with cabazitaxel than when treated with an alternative ASTI. Other clinical outcomes, including progression-free survival and PSA response, also favored cabazitaxel.
Commentary
Four ASTIs are approved for the treatment of men with advanced prostate cancer. The next line of therapy following progression on an ASTI, whether a second-line ASTI or taxane-based chemotherapy, has been unclear. The CARD trial sought to answer this question and provides evidence supporting cabazitaxel as the next line of therapy for these patients. Imaging-based disease progression or death, the events comprising the primary endpoint, occurred in 73.6% of those who received cabazitaxel and in 80.2% of those who received abiraterone or enzalutamide. Patients treated with cabazitaxel had longer imaging-based progression-free survival (8.0 months vs 3.7 months) and a longer duration of treatment (22 vs 12.5 weeks).
Because there is clinical evidence of cross-resistance between different ASTIs, the value of sequential ASTI therapy has been unclear. Detection of androgen-receptor splice variant 7 (AR-V7) in circulating tumor cells is associated with poor outcomes on a second androgen-signaling inhibitor and may be an indicator of resistance to subsequent androgen-signaling inhibitors.1,2 In the PROPHECY trial, response rates to subsequent androgen-targeted therapy in patients with AR-V7–positive disease ranged from 30% to 40%.3 Understanding how AR-V7 status affects such outcomes will help define whether a subgroup exists in whom second-line androgen-signaling inhibitors may still be considered.
The patients enrolled in the current study appear to represent a subgroup with biologically aggressive disease or inherent resistance to ASTIs. Patients in this study had progressed within 1 year of androgen-targeted therapy, representing a more aggressive population that may be relatively hormone insensitive and may derive more benefit from chemotherapy. Initial androgen deprivation therapy was given for 13.7 and 12.6 months in the cabazitaxel and enzalutamide/abiraterone arms, respectively, before castration-resistant disease developed. Patients in this study had also previously received docetaxel, deselecting those who are taxane-resistant and would therefore be less likely to respond to additional taxane-based therapy. Detection of AR-V7 in circulating tumor cells, other biomarker data, and prior taxane sensitivity may help guide decisions regarding the use of sequential androgen-targeted agents; however, there are no clear data to support such an approach. It is also important to note that, because this was a European study, the cabazitaxel dose used was 25 mg/m2; the PROSELICA trial previously demonstrated noninferiority of 20 mg/m2 compared with 25 mg/m2, with fewer adverse events, and the 20 mg/m2 dose is now used in the United States.4
The grade 3 or higher adverse events that occurred more frequently with cabazitaxel, including asthenia/fatigue, diarrhea, peripheral neuropathy, and febrile neutropenia, should be discussed with patients.
The data from the CARD trial provide guidance on therapy sequencing for patients with advanced prostate cancer after progression on a first-line androgen-targeted inhibitor and docetaxel; however, further work is needed to determine how broadly these data apply.
Applications for Clinical Practice
Patients with metastatic castration-resistant prostate cancer who have received docetaxel and progressed on an androgen-signaling inhibitor within 12 months should be considered for cabazitaxel rather than an alternative androgen-signaling inhibitor. This decision should take into account several factors, including AR-V7 status, duration of prior androgen deprivation therapy, and past sensitivity to hormonal therapy and taxanes. Future studies are likely to incorporate genomic biomarkers, rather than clinical criteria alone, into treatment decisions.
–Britni Souther, DO, and Daniel Isaac, DO, MS, Michigan State University, East Lansing, MI
1. Antonarakis ES, Lu C, Wang H, et al. AR-V7 and resistance to enzalutamide and abiraterone in prostate cancer. N Engl J Med. 2014;371:1028-1038.
2. Zhang T, Karsh LI, Nissenblatt MJ, et al. Androgen receptor splice variant, AR-V7, as a biomarker of resistance to androgen axis-targeted therapies in advanced prostate cancer. Clin Genitourin Cancer. 2019;18:1-10.
3. Armstrong AJ, Halabi S, Luo J, et al. Prospective multicenter validation of androgen receptor splice variant 7 and hormone therapy resistance in high-risk castration-resistant prostate cancer: the PROPHECY study. J Clin Oncol. 2019;37:1120-1129.
4. Eisenberger M, Hardy-Bessard AC, Kim CS, et al. Phase III study comparing a reduced dose of cabazitaxel (20 mg/m2) and the currently approved dose (25 mg/m2) in postdocetaxel patients with metastatic castration-resistant prostate cancer-PROSELICA. J Clin Oncol. 2017;35:3198-3206.
Study Overview
Objective. To evaluate the efficacy of cabazitaxel compared to androgen-signaling–targeted inhibitors (ASTIs) in patients with metastatic castration-resistant prostate cancer who have received docetaxel and have progressed within 12 months of treatment with either abiraterone or enzalutamide.
Design. The CARD trial was an international, randomized, open-label phase 3 trial conducted across 13 European countries.
Setting and participants. Eligible patients were 18 years of age or older; had metastatic castration-resistant prostate cancer previously treated with docetaxel; and had disease progression during 12 months of treatment with abiraterone or enzalutamide. All patients had histologically proven prostate cancer, castrate levels of serum testosterone, and disease progression, defined by at least 2 new bone lesions or rising prostate-specific antigen (PSA) level. A total of 255 patients underwent randomization between November 2015 and November 2018, with 129 assigned to receive cabazitaxel and 126 patients assigned to receive an ASTI, 58 of whom received abiraterone and 66 of whom received enzalutamide. Patients who had received an ASTI in the setting of castrate-sensitive metastatic prostate cancer were included.
Intervention. Patients were randomized in a 1:1 fashion to receive either cabazitaxel or abiraterone or enzalutamide. Patients receiving cabazitaxel 25 mg/m2 intravenously every 3 weeks also received oral prednisone daily and primary prophylactic granulocyte-colony stimulating factor. Patients assigned to receive an ASTI received abiraterone 1000 mg orally daily with prednisone 5 mg twice daily or enzalutamide 160 mg daily. Patients in the ASTI group who had progressed on abiraterone were assigned to enzalutamide, and alternatively, those on enzalutamide were assigned to abiraterone. Patients were treated until 1 of the following occurred: imaging-based disease progression, unacceptable toxicity, or advancing to an alternative therapy.
Main outcome measures. The primary endpoint was imaging-based progression-free survival, which was defined as the time from randomization until objective tumor progression, progression of bone lesions, or death. The secondary endpoints were overall survival, progression-free survival, PSA response, tumor and pain responses, a new symptomatic skeletal event, and safety.
Results. The median follow-up was 9.2 months. Imaging-based disease progression or death from any cause occurred in 95 (73.6%) participants in the cabazitaxel group, as compared to 101 (80.2%) who were assigned to receive an ASTI. The median imaging-based progression-free survival was 8.0 months in the cabazitaxel group and 3.7 months in the abiraterone/enzalutamide group. The median duration of treatment was longer in those receiving cabazitaxel (22 vs 12.5 weeks). The primary reason for treatment discontinuation was disease progression (in 43.7% of patients receiving cabazitaxel and 71% receiving an ASTI) or an adverse event (19.8% and 8.9%, respectively).
The trial’s secondary endpoints demonstrated improved outcomes in the cabazitaxel group compared to the abiraterone/enzalutamide group. There were 70 deaths (54.2%) in the cabazitaxel group and 83 (65.9%) in the ASTI group. Both the median overall survival (13.6 months in the cabazitaxel group and 11 months in the ASTI group) and the median progression-free survival (4.4 months and 2.7 months, respectively) were improved in those who received cabazitaxel. There was a 50% or greater reduction in the PSA level from baseline in 35.7% of the cabazitaxel group and 13.5% of the ASTI group.
Regarding the safety of the agents, the incidence of adverse events was similar in each group (38.9% in the cabazitaxel group and 38.7% in the ASTI group). Treatment discontinuation occurred more frequently in the cabazitaxel group (19.8%) compared to the ASTI group (8.9%). Adverse events of grade 3 or higher occurred more frequently with cabazitaxel; these were asthenia (4% vs 2.4%), diarrhea (3.2% vs 0), peripheral neuropathy (3.2% vs 0 patients), and febrile neutropenia (3.2% vs 0 patients).
Conclusion. Patients who had disease progression within 12 months on an ASTI and had previously been treated for metastatic castration-resistant prostate cancer with docetaxel had longer imaging-based progression-free survival and overall survival when treated with cabazitaxel compared to those treated with an alternative ASTI. Other clinical outcomes, including overall survival and progression-free survival, were also improved in the cabazitaxel group.
Commentary
Four ASTIs are approved for therapy in men with advanced prostate cancer. The next line of therapy following progression on an ASTI, whether to consider second-line androgen targeted inhibitors or proceed to taxane-based chemotherapy, has been unclear. The current CARD trial sought to answer this question and provides evidence that cabazitaxel is the next line of therapy for these patients. The trial’s primary endpoint, imaging-based disease progression, was reported in 73.6% of those who received cabazitaxel and in 80.2% of those who received abiraterone or enzalutamide. Patients treated with cabazitaxel had a longer imaging-based progression-free survival (8.0 months vs 3.7 months) and a longer duration of treatment (22 vs 12.5 weeks).
Because there is clinical evidence of cross-resistance between different ASTIs, the value of sequential therapy has been unclear. Emergence of androgen-receptor splice variant 7 (AR-V7) mutational status in circulating tumor cells is associated with poor outcomes with secondary androgen-signaling inhibitor therapy, and may be an indicator of resistance to subsequent androgen-signaling inhibitors.1,2 In the PROPHECY trial, the response rates to subsequent androgen targeted therapy in patients with AR-V7 mutations ranged from 30% to 40%.3 Understanding how AR-V7 mutational status may impact such outcomes will certainly help define whether a subgroup exists in whom use of second-line androgen signaling inhibitors may be considered.
The patients enrolled in the current study appear to represent a subgroup of patients with biologically aggressive disease or with inherent resistance to ASTIs. The patients included in this study progressed within 1 year of androgen targeted therapy, which is representative of a more aggressive population of patients who may be hormone insensitive and derive more benefit from chemotherapy. Initial androgen deprivation therapy was given for 13.7 and 12.6 months to the cabazitaxel and enzalutamide/abiraterone arms, respectively, prior to developing castrate-resistant prostate cancer. Patients enrolled in this study also previously received docetaxel, deselecting those who are taxane-resistant and therefore may be less likely to respond to additional taxane-based therapy. Detection of AR-V7 splice variant expression in circulating tumor cells, consideration of biomarker data, and sensitivity to taxanes may help guide decisions regarding the use of sequential androgen-targeted agents; however, there has been no clear data to guide such an approach. It is also important to consider that, because this is a European study, the approved dose given in this trial was 25 mg/m2. The PROSELICA trial previously demonstrated noninferiority of 20 mg/m2 compared with 25 mg/m2, with fewer adverse events, which is the dose now utilized in the United States.4
The adverse events of grade 3 or greater occurring in the cabazitaxel group should be discussed with patients, including fatigue, diarrhea, peripheral neuropathy, and febrile neutropenia.
The data from the CARD trial provide guidance regarding therapy sequencing in those with advanced prostate cancer after progression on first-line androgen targeted inhibitors and docetaxel; however, further work is needed to understand the universal application of this data in this cohort.
Applications in Clinical Practice
Patients with metastatic castration-resistant prostate cancer who have received docetaxel and progressed on an androgen-signaling inhibitor within 12 months should be considered for cabazitaxel over an alternative androgen-signaling inhibitor. This decision should be based on several factors, including AR-V7 mutational status, duration of androgen deprivation therapy, and hormone and taxane sensitivity in the past. Future studies are likely to incorporate genomic biomarkers rather than clinical criteria alone to make treatment decisions.
–Britni Souther, DO, and Daniel Isaac, DO, MS, Michigan State University, East Lansing, MI
Study Overview
Objective. To evaluate the efficacy of cabazitaxel compared to androgen-signaling–targeted inhibitors (ASTIs) in patients with metastatic castration-resistant prostate cancer who have received docetaxel and have progressed within 12 months of treatment with either abiraterone or enzalutamide.
Design. The CARD trial was an international, randomized, open-label phase 3 trial conducted across 13 European countries.
Setting and participants. Eligible patients were 18 years of age or older; had metastatic castration-resistant prostate cancer previously treated with docetaxel; and had disease progression during 12 months of treatment with abiraterone or enzalutamide. All patients had histologically proven prostate cancer, castrate levels of serum testosterone, and disease progression, defined by at least 2 new bone lesions or rising prostate-specific antigen (PSA) level. A total of 255 patients underwent randomization between November 2015 and November 2018, with 129 assigned to receive cabazitaxel and 126 patients assigned to receive an ASTI, 58 of whom received abiraterone and 66 of whom received enzalutamide. Patients who had received an ASTI in the setting of castrate-sensitive metastatic prostate cancer were included.
Intervention. Patients were randomized in a 1:1 fashion to receive either cabazitaxel or abiraterone or enzalutamide. Patients receiving cabazitaxel 25 mg/m2 intravenously every 3 weeks also received oral prednisone daily and primary prophylactic granulocyte-colony stimulating factor. Patients assigned to receive an ASTI received abiraterone 1000 mg orally daily with prednisone 5 mg twice daily or enzalutamide 160 mg daily. Patients in the ASTI group who had progressed on abiraterone were assigned to enzalutamide, and alternatively, those on enzalutamide were assigned to abiraterone. Patients were treated until 1 of the following occurred: imaging-based disease progression, unacceptable toxicity, or advancing to an alternative therapy.
Main outcome measures. The primary endpoint was imaging-based progression-free survival, which was defined as the time from randomization until objective tumor progression, progression of bone lesions, or death. The secondary endpoints were overall survival, progression-free survival, PSA response, tumor and pain responses, a new symptomatic skeletal event, and safety.
Results. The median follow-up was 9.2 months. Imaging-based disease progression or death from any cause occurred in 95 (73.6%) participants in the cabazitaxel group, as compared to 101 (80.2%) who were assigned to receive an ASTI. The median imaging-based progression-free survival was 8.0 months in the cabazitaxel group and 3.7 months in the abiraterone/enzalutamide group. The median duration of treatment was longer in those receiving cabazitaxel (22 vs 12.5 weeks). The primary reason for treatment discontinuation was disease progression (in 43.7% of patients receiving cabazitaxel and 71% receiving an ASTI) or an adverse event (19.8% and 8.9%, respectively).
The trial’s secondary endpoints demonstrated improved outcomes in the cabazitaxel group compared to the abiraterone/enzalutamide group. There were 70 deaths (54.2%) in the cabazitaxel group and 83 (65.9%) in the ASTI group. Both the median overall survival (13.6 months in the cabazitaxel group and 11 months in the ASTI group) and the median progression-free survival (4.4 months and 2.7 months, respectively) were improved in those who received cabazitaxel. There was a 50% or greater reduction in the PSA level from baseline in 35.7% of the cabazitaxel group and 13.5% of the ASTI group.
Regarding the safety of the agents, the incidence of adverse events was similar in each group (38.9% in the cabazitaxel group and 38.7% in the ASTI group). Treatment discontinuation occurred more frequently in the cabazitaxel group (19.8%) compared to the ASTI group (8.9%). Adverse events of grade 3 or higher occurred more frequently with cabazitaxel; these were asthenia (4% vs 2.4%), diarrhea (3.2% vs 0), peripheral neuropathy (3.2% vs 0 patients), and febrile neutropenia (3.2% vs 0 patients).
Conclusion. Patients who had disease progression within 12 months on an ASTI and had previously been treated for metastatic castration-resistant prostate cancer with docetaxel had longer imaging-based progression-free survival and overall survival when treated with cabazitaxel compared to those treated with an alternative ASTI. Other clinical outcomes, including overall survival and progression-free survival, were also improved in the cabazitaxel group.
Commentary
Four ASTIs are approved for therapy in men with advanced prostate cancer. The next line of therapy following progression on an ASTI, whether to consider second-line androgen targeted inhibitors or proceed to taxane-based chemotherapy, has been unclear. The current CARD trial sought to answer this question and provides evidence that cabazitaxel is the next line of therapy for these patients. The trial’s primary endpoint, imaging-based disease progression, was reported in 73.6% of those who received cabazitaxel and in 80.2% of those who received abiraterone or enzalutamide. Patients treated with cabazitaxel had a longer imaging-based progression-free survival (8.0 months vs 3.7 months) and a longer duration of treatment (22 vs 12.5 weeks).
Because there is clinical evidence of cross-resistance between different ASTIs, the value of sequential therapy has been unclear. Emergence of androgen-receptor splice variant 7 (AR-V7) mutational status in circulating tumor cells is associated with poor outcomes with secondary androgen-signaling inhibitor therapy, and may be an indicator of resistance to subsequent androgen-signaling inhibitors.1,2 In the PROPHECY trial, the response rates to subsequent androgen targeted therapy in patients with AR-V7 mutations ranged from 30% to 40%.3 Understanding how AR-V7 mutational status may impact such outcomes will certainly help define whether a subgroup exists in whom use of second-line androgen signaling inhibitors may be considered.
The patients enrolled in the current study appear to represent a subgroup of patients with biologically aggressive disease or with inherent resistance to ASTIs. The patients included in this study progressed within 1 year of androgen targeted therapy, which is representative of a more aggressive population of patients who may be hormone insensitive and derive more benefit from chemotherapy. Initial androgen deprivation therapy was given for 13.7 and 12.6 months to the cabazitaxel and enzalutamide/abiraterone arms, respectively, prior to developing castrate-resistant prostate cancer. Patients enrolled in this study also previously received docetaxel, deselecting those who are taxane-resistant and therefore may be less likely to respond to additional taxane-based therapy. Detection of AR-V7 splice variant expression in circulating tumor cells, consideration of biomarker data, and sensitivity to taxanes may help guide decisions regarding the use of sequential androgen-targeted agents; however, there has been no clear data to guide such an approach. It is also important to consider that, because this is a European study, the approved dose given in this trial was 25 mg/m2. The PROSELICA trial previously demonstrated noninferiority of 20 mg/m2 compared with 25 mg/m2, with fewer adverse events, which is the dose now utilized in the United States.4
Adverse events of grade 3 or greater occurring more frequently in the cabazitaxel group, including fatigue, diarrhea, peripheral neuropathy, and febrile neutropenia, should be discussed with patients.
The data from the CARD trial provide guidance regarding therapy sequencing in patients with advanced prostate cancer after progression on a first-line androgen-targeted inhibitor and docetaxel; however, further work is needed to understand how broadly these data apply within this population.
Applications in Clinical Practice
Patients with metastatic castration-resistant prostate cancer who have received docetaxel and progressed on an androgen-signaling inhibitor within 12 months should be considered for cabazitaxel over an alternative androgen-signaling inhibitor. This decision should take into account several factors, including AR-V7 status, duration of androgen deprivation therapy, and prior hormone and taxane sensitivity. Future studies are likely to incorporate genomic biomarkers rather than clinical criteria alone to guide treatment decisions.
–Britni Souther, DO, and Daniel Isaac, DO, MS, Michigan State University, East Lansing, MI
1. Antonarakis ES, Lu C, Wang H, et al. AR-V7 and resistance to enzalutamide and abiraterone in prostate cancer. N Engl J Med. 2014;371:1028-1038.
2. Zhang T, Karsh LI, Nissenblatt MJ, et al. Androgen receptor splice variant, AR-V7, as a biomarker of resistance to androgen axis-targeted therapies in advanced prostate cancer. Clin Genitourin Cancer. 2019;18:1-10.
3. Armstrong AJ, Halabi S, Luo J, et al. Prospective multicenter validation of androgen receptor splice variant 7 and hormone therapy resistance in high-risk castration-resistant prostate cancer: the PROPHECY study. J Clin Oncol. 2019;37:1120-1129.
4. Eisenberger M, Hardy-Bessard AC, Kim CS, et al. Phase III study comparing a reduced dose of cabazitaxel (20 mg/m2) and the currently approved dose (25 mg/m2) in postdocetaxel patients with metastatic castration-resistant prostate cancer-PROSELICA. J Clin Oncol. 2017;35:3198-3206.
Collaborative Dementia Care via Telephone and Internet Improves Quality of Life and Reduces Caregiver Burden
Study Overview
Objective. To examine the effectiveness of a hub site–based care delivery system in delivering a dementia care management program to persons with dementia and their caregivers.
Design. Randomized pragmatic clinical trial enrolling dyads of persons with dementia and their caregivers. Study participants were randomly assigned to the dementia care management program or usual care in a 2:1 ratio.
Setting and participants. The study was conducted from 2 hub sites: the University of California, San Francisco, and the University of Nebraska Medical Center in Omaha. Each hub-site team served persons with dementia and their caregivers in California, Nebraska, and Iowa in both urban and rural areas. Participants were recruited through referral by treating providers or self-referral in response to advertising presented through a community outreach event, in the news, or on the internet. Eligibility requirements included: having a dementia diagnosis made by a treating provider; age older than 45 years; Medicare or Medicaid enrollment or eligibility; presence of a caregiver willing to enroll in the study; fluency in English, Spanish, or Cantonese; and residence in California, Nebraska, or Iowa. Exclusion criteria included residence in a nursing home. Out of 2585 referred dyads of persons with dementia and caregivers, 780 met inclusion criteria and were enrolled. A 2:1 randomization yielded 512 dyads in the intervention group and 268 dyads in the control group.
Intervention. The dementia care management program was implemented through the Care Ecosystem, a telephone- and internet-based supportive care intervention delivered by care team navigators. The navigators were unlicensed but trained dementia care guides working under the supervision of an advanced practice nurse, social worker, and pharmacist. The intervention consisted of telephone calls placed by navigators over a 12-month period, monthly or at a frequency determined by needs and preferences; the content of the calls included responding to the immediate needs of persons with dementia and their caregivers, screening for common problems, and providing support and education using care plan protocols. Caregivers and persons with dementia were encouraged to initiate contact through email, mail, or telephone for dementia-related questions. Additional support was provided by an advanced practice nurse, social worker, or pharmacist as needed, and these health care professionals communicated further with the person with dementia, the caregiver, or outside professionals, such as physicians, as needed. The average number of telephone calls over the 12-month period was 15.3 (standard deviation, 11.3). Participants assigned to usual care were given contact information for dementia and aging-related organizations, including the Alzheimer’s Association and the Area Agencies on Aging, and were sent a quarterly newsletter with general information about dementia.
Main outcome measures. The primary outcome measure was the Quality of Life in Alzheimer’s Disease score obtained by caregiver interview. This quality of life measure includes the following aspects, each rated on an ordinal scale of 1 to 4: physical health, energy level, mood, living situation, memory, family, closest relationship, friends, self, ability to do things for fun, finances, and life as a whole. The scores range from 13 to 52, with a higher score indicating better quality of life for persons with dementia. Other outcomes included frequency of emergency room visits, hospital use, and ambulance use; caregiver depression score from the Patient Health Questionnaire scale; caregiver burden score using the 12-item Zarit Burden Interview; caregiver self-efficacy; and caregiver satisfaction.
Main results. The study found that quality of life for persons with dementia declined more in the usual care group than in the intervention group during the 12-month study period (difference of 0.53; 95% confidence interval, 0.25-1.3; P = 0.04). Persons with dementia in the intervention group also had fewer emergency room visits, with a number needed to treat of 5 to prevent 1 emergency room visit. The intervention did not reduce ambulance use or hospital use. Caregivers in the intervention group had a greater decline in depression compared with usual care; the frequency of moderate to severe depression decreased from 13.4% at baseline to 7.9% at 12 months (P = 0.004). Caregiver burden declined more in the intervention group than in the control group at 12 months (P = 0.046). In terms of caregiver satisfaction, 97% of caregivers surveyed in the intervention group said they would recommend the intervention to another caregiver; 45% indicated they were very satisfied, and 33% indicated they were satisfied.
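For context, the number needed to treat (NNT) is the reciprocal of the absolute risk reduction (ARR); the back-calculation below uses only this standard definition and is illustrative rather than a value reported by the trial:
\[ \text{NNT} = \frac{1}{\text{ARR}} \quad\Longrightarrow\quad \text{ARR} = \frac{1}{\text{NNT}} = \frac{1}{5} = 0.20 \]
In other words, under the standard interpretation of NNT, roughly 1 emergency room visit was averted for every 5 dyads receiving the intervention over the 12-month study period.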
Conclusion. Delivering dementia care via telephone and internet through a collaborative program with care navigators can reduce caregiver burden and depression, improve caregiver well-being, and improve quality of life and reduce emergency room utilization for persons with dementia. In addition, the program was well received.
Commentary
Dementia, including Alzheimer’s disease, primarily affects older adults and is characterized by declines in memory and cognitive function. It is often accompanied by neuropsychiatric symptoms such as agitation, wandering, and physical and verbal outbursts, which are debilitating for persons living with dementia and difficult for caregivers to cope with.1 These symptoms are often a source of caregiver stress, potentially leading to caregiver depression and eventual need for long-term institution-based care, such as nursing home placement.2
Prior literature has established the potential of support interventions to improve caregiver outcomes, including caregiver stress and burden, through approaches such as enhancing resources for caregivers, teaching caregivers coping strategies, and teaching caregivers how to manage support for their loved ones.3,4 However, wider adoption of these interventions may be limited if they involve in-person meetings or activities that take caregivers away from caregiving; the scalability of these programs is also limited by their capacity to reach persons with dementia and their caregivers. These barriers are particularly important for older adults living in rural areas, where limited resources and distance from quality care may be especially constraining.5 Leveraging advances in technology and telecommunication, this study examined the effects of providing dementia care support via telephone and internet using a trained, unlicensed care navigator as the main point of contact. The results showed improved quality of life for persons with dementia, reduced need for emergency room visits, and reduced caregiver burden and depression. The intervention is promising as a scalable approach that may impact dementia care nationwide.
Despite the promising results, there are several issues regarding the intervention’s applicability and impact that future studies may help to further clarify. Although the improvement in quality of life in persons with dementia is important to document, it is unclear whether this difference is clinically significant. Also, it may be important to examine whether the 12-month program has sustained impact beyond the study period, although the intervention could be conceived as a long-term care solution. If the intervention is sustained beyond 12 months, future studies may look at other clinical outcomes, such as incidence of institutionalization and perhaps time to institutionalization. The study population consisted of persons with dementia of various stages, half of whom had mild disease. Future studies may further clarify at which stage of dementia the intervention is most useful. Other changes that occurred during the study period, such as change in the use of paid home-based support services and referrals to other relevant evaluations and treatment, may provide further clues about how the dementia care intervention achieved its beneficial effects.
Applications for Clinical Practice
From the health systems perspective, dementia care accounts for significant resources, and these costs are expected to grow as the population ages and dementia prevalence increases. Identifying potentially scalable interventions that yield clinical benefits and are sustainable from a cost perspective is an important step forward in improving care for persons with dementia and their caregivers across the nation. The use of centralized hubs to deliver this intervention and the novel use of telecommunications advances make this intervention applicable across large areas. Policy makers should explore how an intervention such as this could be established and sustained in our health care system.
–William W. Hung, MD, MPH
1. Mega MS, Cummings JL, Fiorello T, Gornbein J. The spectrum of behavioral changes in Alzheimer’s disease. Neurology. 1996;46:130-135.
2. Gallagher-Thompson D, Brooks JO 3rd, Bliwise D, et al. The relations among caregiver stress, “sundowning” symptoms, and cognitive decline in Alzheimer’s disease. J Am Geriatr Soc. 1992;40:807-810.
3. Livingston G, Barber J, Rapaport P, et al. Clinical effectiveness of a manual based coping strategy programme (START, STrAtegies for RelaTives) in promoting the mental health of carers of family members with dementia: pragmatic randomised controlled trial. BMJ. 2013;347:f6276.
4. Belle SH, Burgio L, Burns R, et al; Resources for Enhancing Alzheimer’s Caregiver Health (REACH) II Investigators. Enhancing the quality of life of dementia caregivers from different ethnic or racial groups: a randomized, controlled trial. Ann Intern Med. 2006;145:727-738.
5. Goins RT, Williams KA, Carter MW, et al. Perceived barriers to health care access among rural older adults: a qualitative study. J Rural Health. 2005;21:206-213.
Switching from TDF- to TAF-Containing Antiretroviral Therapy: Impact on Bone Mineral Density in Older Patients Living With HIV
Study Overview
Objective. To evaluate the effect of changing from tenofovir disoproxil fumarate (TDF)-containing antiretroviral therapy (ART) to tenofovir alafenamide (TAF)-containing ART in patients aged 60 years and older living with HIV.
Design. Prospective, open-label, multicenter, randomized controlled trial.
Setting and participants. The study was completed across 36 European centers over 48 weeks. Patients were enrolled from December 12, 2015, to March 21, 2018, and were eligible to participate if they were diagnosed with HIV-1; virologically suppressed to < 50 copies/mL; on a TDF-containing ART regimen; and ≥ 60 years of age.
Intervention. Participants (n = 167) were randomly assigned in a 2:1 ratio to ART with TAF (10 mg), elvitegravir (EVG; 150 mg), cobicistat (COB; 150 mg), and emtricitabine (FTC; 200 mg) or to continued therapy with a TDF-containing ART regimen (300 mg TDF).
Main outcome measures. Primary outcome measures were the change in spine and hip bone mineral density from baseline at week 48. Secondary outcome measures included bone mineral density changes from baseline at week 24, HIV viral suppression and change in CD4 count at weeks 24 and 48, and the assessment of safety and tolerability of each ART regimen until week 48.
Main results. At 48 weeks, patients (n = 111) in the TAF+EVG+COB+FTC group had a mean 2.24% (SD, 3.27) increase in spine bone mineral density, while those in the TDF-containing group (n = 56) had a mean 0.10% decrease (SD, 3.39), a difference of 2.43% (95% confidence interval [CI], 1.34-3.52; P < 0.0001). In addition, at 48 weeks patients in the TAF+EVG+COB+FTC group had a mean 1.33% increase (SD, 2.20) in hip bone mineral density, as compared with a mean 0.73% decrease (SD, 3.21) in the TDF-containing group, a difference of 2.04% (95% CI, 1.17-2.90; P < 0.0001).
Similar results were seen in spine and hip bone mineral density in the TAF+EVG+COB+FTC group at week 24, with increases of 1.75% (P = 0.00080) and 1.35% (P = 0.00040), respectively. Both treatment groups maintained high virologic suppression. The TAF+EVG+COB+FTC group maintained 94.5% virologic suppression at week 24 and 93.6% at week 48, as compared with virologic suppression of 100% and 94.5% at weeks 24 and 48, respectively, in the TDF-containing group. The TAF+EVG+COB+FTC group also had an increase in CD4 count from baseline (56 cells/µL), with essentially no change in the TDF-containing group (–1 cell/µL). Patients in the TAF+EVG+COB+FTC group had a mean 27.8 mg/g decrease in urine albumin-to-creatinine ratio (UACR) versus a 7.7 mg/g decrease in the TDF-containing group (P = 0.0042). In addition, patients in the TAF+EVG+COB+FTC group had a mean 49.8 mg/g decrease in urine protein-to-creatinine ratio (UPCR) versus a 3.8 mg/g decrease in the TDF-containing group (P = 0.0042).
Conclusion. Patients 60 years of age or older living with virologically suppressed HIV may benefit from improved bone mineral density at 48 weeks after switching from a TDF-containing ART regimen to a TAF-containing regimen, which, in turn, may help reduce the risk for osteoporosis. Patients who switched to a TAF-containing regimen also had favorable improvements in UACR and UPCR, which could indicate better renal function.
Commentary
The Centers for Disease Control and Prevention estimated that in 2018 nearly half of those living with HIV in the United States were older than 50 years.1 Today, the life expectancy of patients living with HIV on ART in developed countries is similar to that of patients not living with HIV. A meta-analysis published in 2017 estimated that patients diagnosed with HIV at age 20 beginning ART have a life expectancy of 63 years, and another study estimated that life expectancy in such patients is 89.1% of that of the general population in Canada.2,3 Overall, most people living with HIV infection are aging and at risk for medical conditions similar to persons without HIV disease. However, rates of osteoporosis in elderly patients with HIV are estimated to be 3 times greater than rates in persons without HIV.4 As a result, it is becoming increasingly important to find ways to decrease the risk of osteoporosis in these patients.
ART typically includes a nucleoside reverse transcriptase inhibitor (NRTI) combination and a third agent, such as an integrase strand inhibitor. Tenofovir is a commonly used backbone NRTI that comes in 2 forms, TDF (tenofovir disoproxil fumarate) and TAF (tenofovir alafenamide). Both are prodrugs that are converted to tenofovir diphosphate. TDF specifically is associated with an increased risk of bone loss and nephrotoxicity. The loss in bone mineral density is most similar to the bone loss seen with oral glucocorticoids.5 TDF has been shown to increase plasma levels of RANKL and tumor necrosis factor-α, leading to increased bone resorption.6 The long-term effects of TDF- versus TAF-containing ART on bone mineral density have, to our knowledge, not been compared previously in a randomized controlled trial. The significance of an increase in bone mineral density for preventing osteoporotic fractures in people living with HIV is less clear. A long-term cohort study completed in Japan looking at patients on TDF showed an increased risk of bone fractures in both older postmenopausal women and younger men.7 However, a retrospective cohort study looking at 1981 patients with HIV found no association between bone fractures and TDF.8
This randomized controlled trial used appropriate methods to measure the reported primary and secondary endpoints; however, it would be of benefit to continue following these patients to measure their true long-term risk of osteoporosis-related complications. In terms of the study’s secondary endpoints, it is notable that the patients maintained HIV viral suppression after the switch and CD4 counts remained stable (with a slight increase observed in the TAF-containing ART cohort).
With regard to renal function, patients in the TAF group had significantly improved UACR and UPCR, which likely reflects improved glomerular function. Improved renal function is also increasingly important for patients with HIV, as up to 48.5% have some form of chronic kidney disease.9
Applications for Clinical Practice
This study shows that making the switch from TDF- to TAF-containing ART can lead to improved bone mineral density. We can extrapolate that switching may lead to a decreased risk of osteoporosis and osteoporosis-related complications, such as bone fracture, but this needs to be investigated in more detail. As demonstrated in this study, switching from a TDF- to a TAF-containing regimen can also lead to improved renal function while maintaining HIV viral suppression and CD4 counts.
Unfortunately, the TAF-containing regimen used in this study (TAF, elvitegravir, cobicistat, and emtricitabine) includes cobicistat, which is no longer recommended as part of initial therapy because of its risk of drug-drug interactions, and elvitegravir, which has a lower barrier to resistance than other integrase strand inhibitors.10,11 The United States Department of Health and Human Services guidelines and the International Antiviral Society-USA Panel suggest several other TAF-containing regimens for initiating or switching therapy in older patients.10,11
When choosing between a TAF- and a TDF-containing regimen to treat HIV infection in older patients, increasing evidence shows that a TAF-containing ART regimen may be more beneficial for people living and aging with virologically suppressed HIV infection.
–Sean P. Bliven and Norman L. Beatty, MD, University of Florida College of Medicine, Division of Infectious Diseases and Global Medicine, Gainesville, FL
1. Centers for Disease Control and Prevention. HIV among people aged 50 and over. 2018. https://www.cdc.gov/hiv/group/age/olderamericans/index.html. Accessed November 22, 2019.
2. Teeraananchai S, Kerr S, Amin J, et al. Life expectancy of HIV-positive people after starting combination antiretroviral therapy: a meta-analysis. HIV Med. 2016;18:256-266.
3. Wandeler G, Johnson LF, Egger M. Trends in life expectancy of HIV-positive adults on antiretroviral therapy across the globe. Curr Opin HIV AIDS. 2016;11:492-500.
4. Brown TT, Qaqish RB. Antiretroviral therapy and the prevalence of osteopenia and osteoporosis: a meta-analytic review. AIDS. 2006;20:2165-2174.
5. Bolland MJ, Grey A, Reid IR. Skeletal health in adults with HIV infection. Lancet Diabetes Endocrinol. 2015;3:63-74.
6. Ofotokun I, Titanji K, Vunnava A, et al. Antiretroviral therapy induces a rapid increase in bone resorption that is positively associated with the magnitude of immune reconstitution in HIV infection. AIDS. 2016;30:405-414.
7. Komatsu A, Ikeda A, Kikuchi A, et al. Osteoporosis-related fractures in HIV-infected patients receiving long-term tenofovir disoproxil fumarate: an observational cohort study. Drug Saf. 2018;41:843-848.
8. Gediminas L, Wright EA, Dong Y, et al. Factors associated with fractures in HIV-infected persons: which factors matter? Osteoporos Int. 2017;28:239-244.
9. Naicker S, Rahmania, Kopp JB. HIV and chronic kidney disease. Clin Nephrol. 2015;83(Suppl 1):S32-S38.
10. United States Department of Health and Human Services. Guidelines for the use of antiretroviral agents in adults and adolescents living with HIV. https://aidsinfo.nih.gov/guidelines/html/1/adult-and-adolescent-arv/0. Accessed December 10, 2019.
11. Saag MS, Benson CA, Gandhi RT, et al. Antiretroviral drugs for treatment and prevention of HIV infection in adults: 2018 recommendations of the International Antiviral Society-USA Panel. JAMA. 2018;320:379-396.
Study Overview
Objective. To evaluate the effect of changing from tenofovir disoproxil fumarate (TDF) –containing antiretroviral therapy (ART) to tenofovir alafenamide (TAF) –containing ART in patients ages 60 years and older living with HIV.
Design. Prospective, open-label, multicenter, randomized controlled trial.
Setting and participants. The study was completed across 36 European centers over 48 weeks. Patients were enrolled from December 12, 2015, to March 21, 2018, and were eligible to participate if they were diagnosed with HIV-1; virologically suppressed to < 50 copies/mL; on a TDF-containing ART regimen; and ≥ 60 years of age.
Intervention. Participants (n = 167) were randomly assigned in a 2:1 ratio to ART with TAF (10 mg), elvitegravir (EVG; 150 mg), cobicistat (COB; 150 mg), and emtricitabine (FTC; 200 mg) or to continued therapy with a TDF-containing ART regimen (300 mg TDF).
Main outcome measures. Primary outcome measures were the change in spine and hip bone mineral density from baseline at week 48. Secondary outcome measures included bone mineral density changes from baseline at week 24, HIV viral suppression and change in CD4 count at weeks 24 and 48, and the assessment of safety and tolerability of each ART regimen until week 48.
Main results. At 48 weeks, patients (n = 111) in the TAF+EVG+COB+FTC group had a mean 2.24% (SD, 3.27) increase in spine bone mineral density, while those in the TDF-containing group (n = 56) had a mean 0.10% decrease (SD, 3.39), a difference of 2.43% (95% confidence interval [CI], 1.34-3.52; P < 0.0001). In addition, at 48 weeks patients in the TAF+EVG+COB+FTC group had a mean 1.33% increase (SD, 2.20) in hip bone mineral density, as compared with a mean 0.73% decrease (SD, 3.21) in the TDF-containing group, a difference of 2.04% (95% CI, 1.17-2.90; P < 0.0001).
Similar results were seen in spine and hip bone mineral density in the TAF+EVG+COB+FTC group at week 24, with increases of 1.75% (P = 0.00080) and 1.35% (P = 0.00040), respectively. Both treatment groups maintained high virologic suppression. The TAF+EVG+COB+FTC group maintained 94.5% virologic suppression at week 24 and 93.6% at week 48, as compared with virologic suppression of 100% and 94.5% at weeks 24 and 48, respectively, in the TDF-containing group. However, the TAF+EVG+COB+FTC group had an increase in CD4 count from baseline (56 cells/µL), with no real change in the TDF-containing group (–1 cell/µL). Patients in the TAF+EVG+COB+FTC group had a mean 27.8 mg/g decrease in urine albumin-to-creatinine ratio (UACR) versus a 7.7 mg/g decrease in the TDF-containing group (P = 0.0042). In addition, patients in the TAF+EVG+COB+FTC group had a mean 49.8 mg/g decrease in urine protein-to-creatinine ratio (UPCR) versus a 3.8 mg/g decrease in the TDF-containing group (P = 0.0042).
Conclusion. Patients 60 years of age or older living with virologically suppressed HIV may benefit from improved bone mineral density by switching from a TDF-containing ART regimen to a TAF-containing regimen after 48 weeks, which, in turn, may help to reduce the risk for osteoporosis. Patients who were switched to a TAF-containing regimen also had favorable improvements in UACR and UPCR, which could indicate better renal function.
Commentary
The Centers for Disease Control and Prevention estimated that in 2018 nearly half of those living with HIV in the United States were older than 50 years.1 Today, the life expectancy of patients living with HIV on ART in developed countries is similar to that of patients not living with HIV. A meta-analysis published in 2017 estimated that patients diagnosed with HIV at age 20 beginning ART have a life expectancy of 63 years, and another study estimated that life expectancy in such patients is 89.1% of that of the general population in Canada.2,3 Overall, most people living with HIV infection are aging and at risk for medical conditions similar to persons without HIV disease. However, rates of osteoporosis in elderly patients with HIV are estimated to be 3 times greater than rates in persons without HIV.4 As a result, it is becoming increasingly important to find ways to decrease the risk of osteoporosis in these patients.
ART typically includes a nucleoside reverse transcriptase inhibitor (NRTI) combination and a third agent, such as an integrase strand inhibitor. Tenofovir is a commonly used backbone NRTI that comes in 2 forms, TDF (tenofovir disoproxil fumarate) and TAF (tenofovir alafenamide). Both are prodrugs that are converted to tenofovir diphosphate. TDF specifically is associated with an increased risk of bone loss and nephrotoxicity. The loss in bone mineral density is most similar to the bone loss seen with oral glucocorticoids.5 TDF has been shown to increase plasma levels of RANKL and tumor necrosis factor-α, leading to increased bone resorption.6 The long-term effects of TDF- versus TAF-containing ART on bone mineral density have, to our knowledge, not been compared previously in a randomized control study. The significance of demonstrating an increase in bone mineral density in the prevention of osteoporotic bone fracture in people living with HIV is less clear. A long-term cohort study completed in Japan looking at patients on TDF showed an increased risk of bone fractures in both older postmenopausal women and younger men.7 However, a retrospective cohort study looking at 1981 patients with HIV found no association between bone fractures and TDF.8
This randomized controlled trial used appropriate methods to measure the reported primary and secondary endpoints; however, it would be of benefit to continue following these patients to measure their true long-term risk of osteoporosis-related complications. In terms of the study’s secondary endpoints, it is notable that the patients maintained HIV viral suppression after the switch and CD4 counts remained stable (with a slight increase observed in the TAF-containing ART cohort).
In regard to the patient’s renal function, patients in the TAF group had significantly improved UACR and UPCR, which likely reflects improved glomerular filtration. Improved renal function is also increasingly important for patients with HIV, as up to 48.5% have some form of chronic kidney disease.9
Applications for Clinical Practice
This study shows that making the switch from TDF- to TAF-containing ART can lead to improved bone mineral density. We can extrapolate that switching may lead to a decreased risk of osteoporosis and osteoporosis-related complications, such as bone fracture, but this needs to be investigated in more detail. As demonstrated in this study, switching from a TDF- to a TAF-containing regimen can also lead to improved renal function while maintaining HIV viral suppression and CD4 counts.
Unfortunately, the regimen selected with TAF in this study (elvitegravir, cobicistat, and emtricitabine) includes cobicistat, which is no longer recommended as initial therapy due to its risk of drug-drug interactions, and elvitegravir, which has a lower barrier to resistance than other integrase strand inhibitors.10,11 The United States Department of Health and Human Services guidelines and the International Antiviral Society-USA Panel suggest using several other TAF-containing regimens for beginning or even switching therapy in older patients.10,11
When choosing between either a TAF- or a TDF-containing regimen to treat HIV infection in older patients, increasing evidence shows that using a TAF-containing ART regimen may be more beneficial for people living and aging with virologically suppressed HIV infection.
–Sean P. Bliven, and Norman L. Beatty, MD, University of Florida College of Medicine, Division of Infectious Diseases and Global Medicine, Gainesville, FL
Study Overview
Objective. To evaluate the effect of changing from tenofovir disoproxil fumarate (TDF) –containing antiretroviral therapy (ART) to tenofovir alafenamide (TAF) –containing ART in patients ages 60 years and older living with HIV.
Design. Prospective, open-label, multicenter, randomized controlled trial.
Setting and participants. The study was completed across 36 European centers over 48 weeks. Patients were enrolled from December 12, 2015, to March 21, 2018, and were eligible to participate if they were diagnosed with HIV-1; virologically suppressed to < 50 copies/mL; on a TDF-containing ART regimen; and ≥ 60 years of age.
Intervention. Participants (n = 167) were randomly assigned in a 2:1 ratio to ART with TAF (10 mg), elvitegravir (EVG; 150 mg), cobicistat (COB; 150 mg), and emtricitabine (FTC; 200 mg) or to continued therapy with a TDF-containing ART regimen (300 mg TDF).
Main outcome measures. Primary outcome measures were the change in spine and hip bone mineral density from baseline at week 48. Secondary outcome measures included bone mineral density changes from baseline at week 24, HIV viral suppression and change in CD4 count at weeks 24 and 48, and the assessment of safety and tolerability of each ART regimen until week 48.
Main results. At 48 weeks, patients (n = 111) in the TAF+EVG+COB+FTC group had a mean 2.24% (SD, 3.27) increase in spine bone mineral density, while those in the TDF-containing group (n = 56) had a mean 0.10% decrease (SD, 3.39), a difference of 2.43% (95% confidence interval [CI], 1.34-3.52; P < 0.0001). In addition, at 48 weeks patients in the TAF+EVG+COB+FTC group had a mean 1.33% increase (SD, 2.20) in hip bone mineral density, as compared with a mean 0.73% decrease (SD, 3.21) in the TDF-containing group, a difference of 2.04% (95% CI, 1.17-2.90; P < 0.0001).
Similar results were seen in spine and hip bone mineral density in the TAF+EVG+COB+FTC group at week 24, with increases of 1.75% (P = 0.00080) and 1.35% (P = 0.00040), respectively. Both treatment groups maintained high virologic suppression. The TAF+EVG+COB+FTC group maintained 94.5% virologic suppression at week 24 and 93.6% at week 48, as compared with virologic suppression of 100% and 94.5% at weeks 24 and 48, respectively, in the TDF-containing group. However, the TAF+EVG+COB+FTC group had an increase in CD4 count from baseline (56 cells/µL), with no real change in the TDF-containing group (–1 cell/µL). Patients in the TAF+EVG+COB+FTC group had a mean 27.8 mg/g decrease in urine albumin-to-creatinine ratio (UACR) versus a 7.7 mg/g decrease in the TDF-containing group (P = 0.0042). In addition, patients in the TAF+EVG+COB+FTC group had a mean 49.8 mg/g decrease in urine protein-to-creatinine ratio (UPCR) versus a 3.8 mg/g decrease in the TDF-containing group (P = 0.0042).
Conclusion. Patients 60 years of age or older living with virologically suppressed HIV may benefit from improved bone mineral density by switching from a TDF-containing ART regimen to a TAF-containing regimen after 48 weeks, which, in turn, may help to reduce the risk for osteoporosis. Patients who were switched to a TAF-containing regimen also had favorable improvements in UACR and UPCR, which could indicate better renal function.
Commentary
The Centers for Disease Control and Prevention estimated that in 2018 nearly half of those living with HIV in the United States were older than 50 years.1 Today, the life expectancy of patients living with HIV on ART in developed countries is similar to that of patients not living with HIV. A meta-analysis published in 2017 estimated that patients diagnosed with HIV at age 20 beginning ART have a life expectancy of 63 years, and another study estimated that life expectancy in such patients is 89.1% of that of the general population in Canada.2,3 Overall, most people living with HIV infection are aging and at risk for medical conditions similar to persons without HIV disease. However, rates of osteoporosis in elderly patients with HIV are estimated to be 3 times greater than rates in persons without HIV.4 As a result, it is becoming increasingly important to find ways to decrease the risk of osteoporosis in these patients.
ART typically includes a nucleoside reverse transcriptase inhibitor (NRTI) combination and a third agent, such as an integrase strand inhibitor. Tenofovir is a commonly used backbone NRTI that comes in 2 forms, TDF (tenofovir disoproxil fumarate) and TAF (tenofovir alafenamide). Both are prodrugs that are converted to tenofovir diphosphate. TDF specifically is associated with an increased risk of bone loss and nephrotoxicity. The loss in bone mineral density is most similar to the bone loss seen with oral glucocorticoids.5 TDF has been shown to increase plasma levels of RANKL and tumor necrosis factor-α, leading to increased bone resorption.6 The long-term effects of TDF- versus TAF-containing ART on bone mineral density have, to our knowledge, not been compared previously in a randomized control study. The significance of demonstrating an increase in bone mineral density in the prevention of osteoporotic bone fracture in people living with HIV is less clear. A long-term cohort study completed in Japan looking at patients on TDF showed an increased risk of bone fractures in both older postmenopausal women and younger men.7 However, a retrospective cohort study looking at 1981 patients with HIV found no association between bone fractures and TDF.8
This randomized controlled trial used appropriate methods to measure the reported primary and secondary endpoints; however, it would be of benefit to continue following these patients to measure their true long-term risk of osteoporosis-related complications. In terms of the study’s secondary endpoints, it is notable that the patients maintained HIV viral suppression after the switch and CD4 counts remained stable (with a slight increase observed in the TAF-containing ART cohort).
In regard to the patient’s renal function, patients in the TAF group had significantly improved UACR and UPCR, which likely reflects improved glomerular filtration. Improved renal function is also increasingly important for patients with HIV, as up to 48.5% have some form of chronic kidney disease.9
Applications for Clinical Practice
This study shows that making the switch from TDF- to TAF-containing ART can lead to improved bone mineral density. We can extrapolate that switching may lead to a decreased risk of osteoporosis and osteoporosis-related complications, such as bone fracture, but this needs to be investigated in more detail. As demonstrated in this study, switching from a TDF- to a TAF-containing regimen can also lead to improved renal function while maintaining HIV viral suppression and CD4 counts.
Unfortunately, the TAF-containing regimen used in this study (TAF co-formulated with elvitegravir, cobicistat, and emtricitabine) includes cobicistat, which is no longer recommended as initial therapy because of its risk of drug-drug interactions, and elvitegravir, which has a lower barrier to resistance than other integrase strand transfer inhibitors.10,11 The United States Department of Health and Human Services guidelines and the International Antiviral Society-USA Panel suggest several other TAF-containing regimens for initiating or switching therapy in older patients.10,11
When choosing between a TAF- and a TDF-containing regimen to treat HIV infection in older patients, increasing evidence suggests that a TAF-containing ART regimen may be more beneficial for people living and aging with virologically suppressed HIV infection.
–Sean P. Bliven and Norman L. Beatty, MD, University of Florida College of Medicine, Division of Infectious Diseases and Global Medicine, Gainesville, FL
1. Centers for Disease Control and Prevention. HIV among people aged 50 and over. 2018. https://www.cdc.gov/hiv/group/age/olderamericans/index.html. Accessed November 22, 2019.
2. Teeraananchai S, Kerr S, Amin J, et al. Life expectancy of HIV-positive people after starting combination antiretroviral therapy: a meta-analysis. HIV Med. 2016;18:256-266.
3. Wandeler G, Johnson LF, Egger M. Trends in life expectancy of HIV-positive adults on antiretroviral therapy across the globe. Curr Opin HIV AIDS. 2016;11:492-500.
4. Brown TT, Qaqish RB. Antiretroviral therapy and the prevalence of osteopenia and osteoporosis: a meta-analytic review. AIDS. 2006;20:2165-2174.
5. Bolland MJ, Grey A, Reid IR. Skeletal health in adults with HIV infection. Lancet Diabetes Endocrinol. 2015;3:63-74.
6. Ofotokun I, Titanji K, Vunnava A, et al. Antiretroviral therapy induces a rapid increase in bone resorption that is positively associated with the magnitude of immune reconstitution in HIV infection. AIDS. 2016;30:405-414.
7. Komatsu A, Ikeda A, Kikuchi A, et al. Osteoporosis-related fractures in HIV-infected patients receiving long-term tenofovir disoproxil fumarate: an observational cohort study. Drug Saf. 2018;41:843-848.
8. Gediminas L, Wright EA, Dong Y, et al. Factors associated with fractures in HIV-infected persons: which factors matter? Osteoporos Int. 2017;28:239-244.
9. Naicker S, Rahmanian S, Kopp JB. HIV and chronic kidney disease. Clin Nephrol. 2015;83(Suppl 1):S32-S38.
10. United States Department of Health and Human Services. Guidelines for the use of antiretroviral agents in adults and adolescents living with HIV. https://aidsinfo.nih.gov/guidelines/html/1/adult-and-adolescent-arv/0. Accessed December 10, 2019.
11. Saag MS, Benson CA, Gandhi RT, et al. Antiretroviral drugs for treatment and prevention of HIV infection in adults: 2018 recommendations of the International Antiviral Society-USA Panel. JAMA. 2018;320:379-396.
Nonculprit Lesion PCI Strategies in Patients With STEMI Without Cardiogenic Shock
Study Overview
Objective. To determine whether percutaneous coronary intervention (PCI) of a nonculprit lesion in patients with ST-segment elevation myocardial infarction (STEMI) reduces the risk of cardiovascular death or myocardial infarction.
Design. International, multicenter, randomized controlled trial blinded to outcome.
Setting and participants. Patients with STEMI who had multivessel coronary disease and had undergone successful PCI to the culprit lesion.
Intervention. A total of 4041 patients were randomly assigned to either PCI of angiographically significant nonculprit lesions or optimal medical therapy without further revascularization. Randomization was stratified according to intended timing of nonculprit lesion PCI (either during or after the index hospitalization).
Main outcome measures. The first co-primary endpoint was the composite of cardiovascular death or myocardial infarction (MI). The second co-primary endpoint was the composite of cardiovascular death, MI or ischemia-driven revascularization.
Main results. At a median follow-up of 3 years, the composite of cardiovascular death or MI occurred in 158 of the 2016 patients (7.8%) in the nonculprit PCI group and in 213 of the 2025 patients (10.5%) in the culprit-lesion-only group (hazard ratio, 0.73; 95% confidence interval [CI], 0.60-0.91; P = 0.004). The second co-primary endpoint occurred in 179 patients (8.9%) in the nonculprit PCI group and in 339 patients (16.7%) in the culprit-lesion-only group (hazard ratio, 0.51; 95% CI, 0.43-0.61; P < 0.001).
Conclusion. Among patients with STEMI and multivessel disease, those who underwent complete revascularization with nonculprit lesion PCI had lower rates of cardiovascular death or MI compared to patients with culprit-lesion-only revascularization.
Commentary
Patients presenting with STEMI often have multivessel disease.1 Although it is known that mortality can be reduced by early revascularization of the culprit vessel,2 whether the nonculprit vessel should be revascularized at the time of presentation with STEMI remains controversial.
Recently, multiple studies have reported a benefit of nonculprit vessel revascularization in patients presenting with hemodynamically stable STEMI. Four trials (PRAMI, CvLPRIT, DANAMI-3-PRIMULTI, and COMPARE-ACUTE) investigated this clinical question with different designs, and all reported a benefit of nonculprit vessel revascularization compared to a culprit-only strategy.3-6 However, the differences in the composite endpoints were driven mainly by softer endpoints, such as refractory angina and ischemia-driven revascularization, and none of these trials was adequately powered to evaluate differences in hard outcomes, such as death or MI.
In this context, Mehta et al conducted a well-designed randomized controlled study to investigate whether achieving complete revascularization by performing PCI on nonculprit vessels would improve the composite of cardiovascular death or MI compared to a culprit-only strategy. At a median follow-up of 3 years, patients who underwent nonculprit vessel PCI had a lower incidence of cardiovascular death or MI than those treated with the culprit-only strategy (7.8% versus 10.5%). The second co-primary endpoint (the composite of cardiovascular death, MI, or ischemia-driven revascularization) also occurred significantly less frequently in the nonculprit PCI group than in the culprit-only PCI group (8.9% versus 16.7%).
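To put these relative differences in absolute terms, a rough back-of-the-envelope calculation using the 3-year event proportions quoted above can estimate the absolute risk reduction and number needed to treat; this is only an illustration based on the reported cumulative proportions, not the trial's time-to-event analysis.

```python
# Rough illustration: absolute risk reduction (ARR) and number needed to treat
# (NNT) from the 3-year proportions reported in COMPLETE. This ignores
# censoring and differences in follow-up, so it is an approximation only.
import math

culprit_only = 0.105     # cardiovascular death or MI, culprit-lesion-only group
complete_pci = 0.078     # cardiovascular death or MI, nonculprit PCI group

arr = culprit_only - complete_pci   # ~0.027, i.e., about 2.7 percentage points
nnt = math.ceil(1 / arr)            # ~38 patients treated to prevent one event
print(f"ARR = {arr:.3f}, NNT = {nnt}")
```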
The current study has a number of strengths. First, this was a multicenter, international study, and the large number of patients enrolled (> 4000) provided adequate power to evaluate the composite of death and MI. Second, the treatments patients received reflect contemporary medical therapy and interventional practice: the potent P2Y12 inhibitor ticagrelor, high-dose statins, and ACE inhibitors were prescribed at high rates, and radial access (> 80%) and current-generation drug-eluting stents were also used at high rates. Third, all angiograms were reviewed by a core laboratory to evaluate completeness of revascularization. Fourth, the trial mandated use of fractional flow reserve to assess lesions with 50% to 69% stenosis before considering revascularization, ensuring that only ischemic or very-high-grade lesions were revascularized. Fifth, the crossover rate in each group was low compared to previous studies (4.7% into the complete revascularization group, 3.9% into the culprit-lesion-only group). Finally, the study evaluated the timing of nonculprit PCI: randomization was stratified according to the intended timing of the nonculprit PCI, either during the index hospitalization or after hospital discharge (within 45 days), and the benefit was consistent regardless of when the nonculprit PCI was performed.
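As a concrete illustration of the lesion-selection rule described in the fourth point above, the short sketch below encodes the eligibility logic for staged nonculprit PCI. The 70% angiographic threshold and the 0.80 FFR cutoff reflect the COMPLETE protocol as we understand it and should be verified against the primary publication.

```python
from typing import Optional

def nonculprit_pci_eligible(stenosis_pct: float, ffr: Optional[float] = None) -> bool:
    """Illustrative COMPLETE-style rule for which nonculprit lesions qualify for staged PCI."""
    if stenosis_pct >= 70:
        return True                                # significant by angiography alone
    if 50 <= stenosis_pct < 70:
        return ffr is not None and ffr <= 0.80     # requires ischemia by fractional flow reserve
    return False
```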
Although the COMPLETE study’s design has a number of strengths, it is important to note that patients enrolled in this trial represent a lower-risk STEMI population. Patients with complex anatomy likely were not included, as evidenced by a lower SYNTAX score (mean, 16). Furthermore, no patients who presented with STEMI complicated by cardiogenic shock were enrolled. In the recent CULPRIT-SHOCK trial, which focused on patients who had multivessel disease, acute MI, and cardiogenic shock, patients who underwent the culprit-only strategy had a lower rate of death or renal replacement therapy, as compared to patients who underwent immediate complete revascularization.7 Therefore, whether the findings from the COMPLETE study can be extended to a sicker population requires further study.
In 2015, the results of previous trials, such as PRAMI and CvLPRIT, led to a focused update of US PCI guidelines.8 Recommendations for noninfarct-related artery PCI in hemodynamically stable patients presenting with acute MI were upgraded from class III to class IIb. The results of the COMPLETE trial will likely influence future guidelines, with stronger recommendations toward complete revascularization in patients presenting with hemodynamically stable STEMI.
Applications for Clinical Practice
In patients presenting with hemodynamically stable STEMI, staged complete revascularization, including the nonculprit vessel, should be considered.
– Taishi Hirai, MD, University of Missouri, Columbia, MO, and John EA Blair, MD, University of Chicago Medical Center, Chicago, IL
1. Park DW, Clare RM, Schulte PJ, et al. Extent, location, and clinical significance of non-infarct-related coronary artery disease among patients with ST-elevation myocardial infarction. JAMA. 2014;312:2019-2027.
2. Hochman JS, Sleeper LA, Webb JG, et al. Early revascularization in acute myocardial infarction complicated by cardiogenic shock. SHOCK Investigators. Should We Emergently Revascularize Occluded Coronaries for Cardiogenic Shock. N Engl J Med. 1999;341:625-634.
3. Wald DS, Morris JK, Wald NJ, et al. Randomized trial of preventive angioplasty in myocardial infarction. N Engl J Med. 2013;369:1115-1123.
4. Gershlick AH, Khan JN, Kelly DJ, et al. Randomized trial of complete versus lesion-only revascularization in patients undergoing primary percutaneous coronary intervention for STEMI and multivessel disease: the CvLPRIT trial. J Am Coll Cardiol. 2015;65:963-972.
5. Engstrom T, Kelbaek H, Helqvist S, et al. Complete revascularisation versus treatment of the culprit lesion only in patients with ST-segment elevation myocardial infarction and multivessel disease (DANAMI-3-PRIMULTI): an open-label, randomised controlled trial. Lancet. 2015;386:665-671.
6. Smits PC, Abdel-Wahab M, Neumann FJ, et al. Fractional flow reserve-guided multivessel angioplasty in myocardial infarction. N Engl J Med. 2017;376:1234-1244.
7. Thiele H, Akin I, Sandri M, et al. PCI strategies in patients with acute myocardial infarction and cardiogenic shock. N Engl J Med. 2017;377:2419-2432.
8. Levine GN, Bates ER, Blankenship JC, et al. 2015 ACC/AHA/SCAI focused update on primary percutaneous coronary intervention for patients with ST-elevation myocardial infarction: an update of the 2011 ACCF/AHA/SCAI guideline for percutaneous coronary intervention and the 2013 ACCF/AHA guideline for the management of ST-elevation myocardial infarction. J Am Coll Cardiol. 2016;67:1235-1250.
Dual vs Triple Therapy Following ACS or PCI in Patients with Atrial Fibrillation
Study Overview
Objective. To compare the benefit of apixaban with a vitamin K antagonist and compare aspirin with placebo in patients with atrial fibrillation who had acute coronary syndrome or underwent percutaneous coronary intervention (PCI) and were planning to take a P2Y12 inhibitor.
Design. Multicenter, international, open-label, prospective randomized controlled trial with a 2-by-2 factorial design.
Setting and participants. 4614 patients who had an acute coronary syndrome or had undergone PCI and were planning to take a P2Y12 inhibitor.
Intervention. Patients were assigned by means of an interactive voice-response system to receive apixaban or a vitamin K antagonist and to receive aspirin or matching placebo for 6 months.
Main outcome measures. The primary outcome was major or clinically relevant nonmajor bleeding. Secondary outcomes included death or hospitalization and a composite of ischemic events.
Main results. At 6 months, major or clinically relevant nonmajor bleeding had occurred in 10.5% of the patients receiving apixaban, as compared to 14.7% of those receiving a vitamin K antagonist (hazard ratio [HR], 0.69; 95% confidence interval [CI], 0.58-0.81; P < 0.001 for both noninferiority and superiority), and in 16.1% of the patients receiving aspirin, as compared with 9.0% of those receiving placebo (HR, 1.89; 95% CI, 1.59-2.24; P < 0.001). Patients in the apixaban group had a lower incidence of death or hospitalization than those in the vitamin K antagonist group (23.5% versus 27.4%; HR, 0.83; 95% CI, 0.74-0.93; P = 0.002) and a similar incidence of ischemic events.
Conclusion. Among patients with atrial fibrillation and a recent acute coronary syndrome or PCI who were treated with a P2Y12 inhibitor, an antithrombotic regimen that included apixaban without aspirin resulted in less bleeding and fewer hospitalizations than regimens that included a vitamin K antagonist, aspirin, or both, without significant differences in the incidence of ischemic events.
Commentary
PCI is performed in about 20% of patients with atrial fibrillation. These patients require dual antiplatelet therapy to prevent ischemic events, combined with long-term anticoagulation to prevent stroke due to atrial fibrillation. Because the combination of anticoagulation and antiplatelet therapy is associated with a higher risk of bleeding, balancing the risk and benefit of dual antiplatelet therapy and anticoagulation in this population is crucial.
Previous studies have assessed the risks and benefits associated with combined anticoagulation and antiplatelet therapy. When warfarin plus clopidogrel (double therapy) was compared with warfarin, aspirin, and clopidogrel (triple therapy) in patients with acute coronary syndromes or stable ischemic coronary disease undergoing PCI, use of clopidogrel without aspirin was associated with a significant reduction in bleeding complications (19.4% versus 44.4%; HR, 0.36; 95% CI, 0.26-0.50; P < 0.0001) without an increase in thrombotic events.1 More recent studies have compared triple therapy with warfarin to double therapy using a direct oral anticoagulant (DOAC). The PIONEER AF-PCI study, which compared low-dose rivaroxaban (15 mg once daily) plus a P2Y12 inhibitor to a vitamin K antagonist plus dual antiplatelet therapy, found lower rates of clinically significant bleeding in the low-dose rivaroxaban group than in the vitamin K antagonist triple-therapy group (16.8% versus 26.7%; HR, 0.59; 95% CI, 0.47-0.76; P < 0.001).2 Similarly, the RE-DUAL PCI trial studied dabigatran and showed that dual therapy with dabigatran had a lower incidence of major or clinically relevant nonmajor bleeding events during follow-up compared to triple therapy including a vitamin K antagonist (15.4% versus 26.9%; HR, 0.52; 95% CI, 0.42-0.63; P < 0.001).3
In this context, Lopes et al investigated the clinical question of dual versus triple therapy in a well-designed randomized clinical trial. Using a 2-by-2 factorial design, the authors studied the effect of apixaban compared to a vitamin K antagonist and the effect of aspirin compared to placebo. Major or clinically relevant nonmajor bleeding occurred in 10.5% of patients receiving apixaban, as compared to 14.7% of those receiving a vitamin K antagonist (HR, 0.69; 95% CI, 0.58-0.81; P < 0.001). The incidence of major or clinically relevant nonmajor bleeding was higher in patients receiving aspirin than in those receiving placebo (16.1% versus 9.0%; HR, 1.89; 95% CI, 1.59-2.24; P < 0.001). Patients in the apixaban group had a lower incidence of death or hospitalization than those in the vitamin K antagonist group (23.5% versus 27.4%; HR, 0.83; 95% CI, 0.74-0.93; P = 0.002). The incidence of ischemic events was similar between the apixaban and vitamin K antagonist groups and between the aspirin and placebo groups.
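Because the two randomized comparisons were made within a single 2-by-2 factorial design, each factor can be analyzed as its own term in a time-to-event model. The sketch below, which uses the lifelines library and hypothetical file and column names rather than the AUGUSTUS dataset, shows the general form of such an analysis.

```python
# Minimal sketch of a factorial time-to-event analysis: one indicator per
# randomized factor in a Cox proportional hazards model. The data file and
# column names are hypothetical and for illustration only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("augustus_like.csv")  # hypothetical: one row per patient with columns
# days_to_bleed, bleed_event (0/1), apixaban (0/1), aspirin (0/1)

cph = CoxPHFitter()
cph.fit(df[["days_to_bleed", "bleed_event", "apixaban", "aspirin"]],
        duration_col="days_to_bleed", event_col="bleed_event")
cph.print_summary()  # exp(coef) is the hazard ratio for each factor versus its control
```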
The strengths of the current study include the large number of patients enrolled. Taken together, the results of the PIONEER AF-PCI, RE-DUAL PCI, and AUGUSTUS trials make clear that DOACs reduce the risk of bleeding compared with vitamin K antagonists. In addition, the AUGUSTUS trial was the first to evaluate the effect of aspirin in patients treated with a DOAC and antiplatelet therapy: aspirin was associated with an increased risk of bleeding, with a rate of ischemic events similar to that with placebo.
The AUGUSTUS trial has several limitations. Although the incidence of ischemic events was similar between the apixaban group and the vitamin K antagonist group, the study was not powered to evaluate individual ischemic outcomes; however, there was no clear evidence of an increase in harm. In addition, because clopidogrel accounted for more than 90% of P2Y12 inhibitor use, the safety and efficacy of combining apixaban with ticagrelor or prasugrel will require further study.
Applications for Clinical Practice
In patients with atrial fibrillation and a recent acute coronary syndrome or PCI who are treated with a P2Y12 inhibitor, dual therapy with a P2Y12 inhibitor and a DOAC should be favored over a regimen that includes a vitamin K antagonist and/or aspirin.
—Taishi Hirai, MD, University of Missouri Medical Center, and John Blair, MD, University of Chicago Medical Center
1. Dewilde WJM, Oirbans T, Verheugt FWA, et al. Use of clopidogrel with or without aspirin in patients taking oral anticoagulant therapy and undergoing percutaneous coronary intervention: an open-label, randomised, controlled trial. Lancet. 2013;381:1107-1115.
2. Gibson CM, Mehran R, Bode C, et al. Prevention of bleeding in patients with atrial fibrillation undergoing PCI. N Engl J Med. 2016;375:2423-2434.
3. Cannon CP, Bhatt DL, Oldgren J, et al. Dual antithrombotic therapy with dabigatran after PCI in atrial fibrillation. N Engl J Med. 2017;377:1513-1524.
Cardioprotective Effect of Metformin in Patients with Decreased Renal Function
Study Overview
Objective. To assess whether metformin use is associated with lower risk of fatal or nonfatal major adverse cardiovascular events (MACE) as compared to sulfonylurea use among diabetic patients with reduced kidney function.
Design. Retrospective cohort study of US Veterans receiving care within the Veterans Health Administration, with data supplemented by linkage to Medicare, Medicaid, and National Death Index data from 2001 through 2016.
Setting and participants. A retrospective cohort of Veterans Health Administration (VHA) patients, aged 18 years and older. Pharmacy data included medication, date filled, days supplied, and number of pills dispensed. For Medicare and Medicaid patients, enrollees’ claims files and prescription (Part D) data were obtained. In addition, dates and cause of death were obtained from vital status and the National Death Index files.
Patients with new-onset type 2 diabetes were identified by selecting new users of metformin, glipizide, glyburide, or glimepiride. These patients were followed longitudinally, and the date of cohort entry and start of follow-up was the day of reaching a reduced kidney function threshold, defined as either an estimated glomerular filtration rate (eGFR) of less than 60 mL/min/1.73 m2 or an elevated serum creatinine (1.5 mg/dL or greater for men, 1.4 mg/dL or greater for women).
Main outcome measures. Primary outcome was the composite of MACE including hospitalization for acute myocardial infarction (AMI), ischemic or hemorrhagic stroke, transient ischemic attack (TIA), or date of cardiovascular death. The secondary outcome excluded TIA as part of the composite MACE event because not all patients who sustain a TIA are admitted to the hospital.
Main results. From January 1, 2002 through December 30, 2015, 67,749 new metformin users and 28,976 new sulfonylurea users who persisted with treatment were identified. After using propensity score-weighted matching, 24,679 metformin users and 24,799 sulfonylurea users entered the final analysis. Cohort patients were 98% male and 81.8% white. Metformin users were younger than sulfonylurea users, with a median age of 61 years versus 71 years, respectively.
For the main outcome, there were 1048 composite MACE events among metformin patients with reduced kidney function and 1394 MACE events among sulfonylurea patients, yielding 23.0 (95% confidence interval [CI], 21.7-24.4) versus 29.2 (95% CI, 27.7-30.7) events per 1000 person-years of use, respectively, after propensity score weighting. After covariate adjustment, the cause-specific adjusted hazard ratio (aHR) for MACE was 0.80 (95% CI, 0.75-0.86) among metformin users compared with sulfonylurea users. The adjusted incidence rate difference was 5.8 (95% CI, 4.1-7.3) fewer events per 1000 person-years for metformin compared with sulfonylurea users. Results were also consistent for each component of the primary outcome, including cardiovascular hospitalizations (aHR, 0.87; 95% CI, 0.80-0.95) and cardiovascular deaths (aHR, 0.70; 95% CI, 0.63-0.78).
Analysis of the secondary outcome, which included AMI, stroke, and cardiovascular death and excluded TIA, demonstrated similar results, with a cause-specific aHR of 0.78 (95% CI, 0.72-0.84) among metformin users compared with sulfonylurea users. The adjusted incidence rate difference was 5.9 (95% CI, 4.3-7.6) fewer events per 1000 person-years for metformin compared with sulfonylurea users.
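To make the rate arithmetic explicit, the brief calculation below back-computes approximate person-time denominators from the reported weighted rates and re-derives the crude rate difference; the person-year figures are inferred for illustration and are not reported in the paper.

```python
# Illustration of incidence-rate arithmetic. Person-year denominators are
# back-calculated from the reported propensity-weighted rates (23.0 and 29.2
# events per 1000 person-years), so they are approximations, not trial data.
def rate_per_1000(events: int, person_years: float) -> float:
    return events / person_years * 1000.0

metformin_py = 1048 / 23.0 * 1000      # ~45,600 person-years (inferred)
sulfonylurea_py = 1394 / 29.2 * 1000   # ~47,700 person-years (inferred)

r_met = rate_per_1000(1048, metformin_py)       # ~23.0 per 1000 person-years
r_sulf = rate_per_1000(1394, sulfonylurea_py)   # ~29.2 per 1000 person-years
print(f"crude rate difference = {r_sulf - r_met:.1f} per 1000 person-years")
# ~6.2 unadjusted, compared with the 5.8 reported after covariate adjustment
```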
Conclusion. For patients with diabetes and reduced kidney function, treatment with metformin monotherapy, as compared with a sulfonylurea, was associated with a lower risk of MACE.
Commentary
There are approximately 30 million US adults with a diagnosis of type 2 diabetes (T2DM), of whom 20% also have impaired kidney function or chronic kidney disease (CKD).1 Metformin hydrochloride has remained the preferred first-line treatment for T2DM based on safety, effectiveness, and low cost.2 Metformin is eliminated by the kidneys and can accumulate as eGFR declines. Because of concern that accumulation could precipitate lactic acidosis, the US Food and Drug Administration (FDA) issued a safety warning restricting metformin in patients with serum creatinine levels of 1.5 mg/dL or greater for men or 1.4 mg/dL or greater for women. The FDA recommends against starting metformin therapy in patients with CKD and an eGFR between 30 and 45 mL/min/1.73 m2, although patients already taking metformin can continue with caution in that setting.1,3
There are several limitations in conducting observational studies comparing metformin to other glucose-lowering medications. First, metformin trials typically excluded patients with CKD due to the FDA warnings. Second, there is usually a time-lag bias in which patients who initiate glucose-lowering medications other than metformin are at a later stage of disease. Third, there is often an allocation bias, as there are substantial differences in baseline characteristics between metformin and sulfonylurea monotherapy users, with metformin users usually being younger and healthier.4
In this retrospective cohort study by Roumie et al, the authors used propensity score-weighted matching to reduce the impact of time-lag and allocation bias. However, several important limitations remain. First, the study design excluded patients who began diabetes treatment after the onset of reduced kidney function; therefore, the findings cannot be generalized to patients who already have a reduced eGFR at the time of metformin initiation. Second, cohort entry and the start of follow-up was the date of either an elevated serum creatinine or an eGFR less than 60 mL/min/1.73 m2, so the cohort may have included some patients with an acute kidney injury event, rather than progression to CKD, who subsequently recovered. Third, the study population consisted mostly of elderly white men; together with the lack of a dose analysis, this limits the generalizability of the findings to other populations.
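For readers less familiar with the weighting approach mentioned above, the sketch below shows the general idea of propensity score weighting (here, inverse-probability-of-treatment weights targeting the treated population). The variable and column names are hypothetical, and this is a generic illustration rather than the authors' analysis code.

```python
# Generic propensity score weighting sketch (not the study's code). A logistic
# model estimates each patient's probability of receiving metformin given
# baseline covariates; weights then balance covariates across the two groups.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("cohort.csv")                                # hypothetical analytic file
covariates = ["age", "baseline_egfr", "hba1c", "prior_cvd"]   # assumed covariate names
X = df[covariates].to_numpy()
treated = df["metformin"].to_numpy()                          # 1 = metformin, 0 = sulfonylurea

ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Weights that make the comparator group resemble the treated group: metformin
# users keep weight 1; sulfonylurea users are weighted by the odds of treatment.
df["weight"] = np.where(treated == 1, 1.0, ps / (1.0 - ps))
```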
Applications for Clinical Practice
The current study found that metformin use, as compared to sulfonylurea use, was associated with a lower risk of fatal or nonfatal major adverse cardiovascular events among patients with reduced kidney function. When clinicians are managing hyperglycemia in patients with type 2 diabetes, it is important to keep in mind that all medications have adverse effects. There are now 11 drug classes for treating diabetes, in addition to multiple insulin options, and the challenge for clinicians is to present clear information to guide patients using shared decision making, based on each patient’s clinical circumstances and preferences, to achieve individualized glycemic target ranges.
–Ka Ming Gordon Ngai, MD, MPH
1. Geiss LS, Kirtland K, Lin J, et al. Changes in diagnosed diabetes, obesity, and physical inactivity prevalence in US counties, 2004-2012. PLoS One. 2017;12:e0173428.
2. Good CB, Pogach LM. Should metformin be first-line therapy for patients with type 2 diabetes and chronic kidney disease? JAMA Intern Med. 2018;178:911-912.
3. US Food and Drug Administration. FDA revises warnings regarding use of the diabetes medicine metformin in certain patients with reduced kidney function. https://www.fda.gov/downloads/Drugs/DrugSafety/UCM494140.pdf. Accessed September 30, 2019.
4. Wexler DJ. Sulfonylureas and cardiovascular safety: the final verdict? JAMA. 2019;322:1147-1149.
Prasugrel Superior to Ticagrelor in Acute Coronary Syndromes
Study Overview
Objective. To assess the relative merits of ticagrelor compared to prasugrel in patients with acute coronary syndromes (ACS) who will undergo invasive evaluation.
Design. Multicenter, open-label, prospective randomized controlled trial.
Setting and participants. A total of 4018 patients who presented with ACS with or without ST-segment elevation.
Intervention. Patients were randomly assigned to receive either ticagrelor or prasugrel.
Main outcome measures. The primary end point was the composite of death, myocardial infarction, or stroke at 1 year. The secondary end point was major bleeding.
Main results. At 1 year, a primary end point event occurred in 184 of 2012 patients (9.3%) in the ticagrelor group and 137 of 2006 patients (6.9%) in the prasugrel group (hazard ratio [HR], 1.36; 95% confidence interval [CI], 1.09-1.70; P = 0.006). In the comparison between the 2 groups, the incidence of major bleeding was not significantly different (5.4% in the ticagrelor group versus 4.8% in the prasugrel group; P = 0.46).
Conclusion. In patients who presented with ACS with or without ST-segment elevation, the incidence of death, myocardial infarction, or stroke was significantly lower among those who received prasugrel as compared to those who received ticagrelor, and incidence of major bleeding was not significantly different.
Commentary
Dual antiplatelet therapy combining an adenosine diphosphate (ADP) receptor antagonist and aspirin is standard treatment for patients presenting with ACS. The limitation of clopidogrel has been its modest antiplatelet effect, with substantial interpatient variability. The newer-generation thienopyridine prasugrel and the reversible, direct-acting oral ADP-receptor antagonist ticagrelor provide a more consistent and greater antiplatelet effect than clopidogrel, and both have been reported to be superior to clopidogrel in reducing ischemic events.1,2 Therefore, current guidelines recommend ticagrelor and prasugrel in preference to clopidogrel.3,4 However, there had been no large randomized controlled study directly comparing ticagrelor and prasugrel. In this context, Schüpke et al investigated this clinical question in a well-designed multicenter randomized controlled trial of patients presenting with ACS. At 12-month follow-up, the composite of death, myocardial infarction, and stroke occurred more frequently in the ticagrelor group than in the prasugrel group (9.3% versus 6.9%; HR, 1.36; 95% CI, 1.09-1.70; P < 0.01). The incidence of major bleeding was not significantly different between the 2 groups (5.4% versus 4.8%; P = 0.46).
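To put the absolute magnitude of this difference into perspective, a rough back-of-the-envelope calculation, not an analysis reported by the trial, converts the published 1-year event percentages into an absolute risk reduction and a number needed to treat.

```python
# Rough illustration (not a trial-reported analysis): absolute risk reduction
# and number needed to treat, from the reported 1-year primary end point rates.
risk_ticagrelor = 0.093   # 9.3% composite event rate with ticagrelor
risk_prasugrel = 0.069    # 6.9% composite event rate with prasugrel

arr = risk_ticagrelor - risk_prasugrel   # absolute risk reduction, ~0.024
nnt = 1 / arr                            # ~42 patients

print(f"ARR = {arr:.3f}; NNT = {nnt:.0f}")
```

Under these assumptions, roughly 40 to 45 patients would need to be treated with prasugrel rather than ticagrelor to prevent one primary end point event over 1 year.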
The strengths of this study include its randomized design and the large number of patients enrolled, with adequate power to evaluate superiority. This was a multicenter European trial with 23 participating centers (21 in Germany). Furthermore, the interventional technique used by the operators reflects more contemporary practice than in the earlier studies comparing each agent to clopidogrel,1,2 with more frequent use of radial access (37%) and drug-eluting stents (90%) and less use of GPIIb/IIIa inhibitors (12%).
There are a few important points to consider given the differences between the 2 agents compared in this study. First, the loading doses of ticagrelor and prasugrel were administered at different times in patients presenting with ACS without ST elevation: ticagrelor was given as soon as possible before coronary angiography, whereas prasugrel was given only after the coronary anatomy was defined, just before intervention, which is how this agent is administered in current clinical practice. Therefore, this open-label trial compared not only different medications but also different administration strategies. Second, ticagrelor and prasugrel have different side-effect profiles. The side effects unique to ticagrelor are dyspnea and bradycardia. On the other hand, a contraindication unique to prasugrel is a history of transient ischemic attack or stroke, because of an increased risk of thrombotic and hemorrhagic stroke.1 In addition, prasugrel carries an increased bleeding risk in patients older than 75 years and in those with low body weight (< 60 kg). In this study, the overall medication discontinuation rate was higher in the ticagrelor group, specifically because of dyspnea, and a reduced prasugrel dose of 5 mg was used in patients older than 75 years or with low body weight.
Because the recommended timing of ticagrelor administration (preloading before coronary angiography) is similar to that of clopidogrel, and given the theoretical benefit of reversible ADP-receptor inhibition, ticagrelor has been used more commonly than prasugrel in clinical practice and has been incorporated into the ACS protocols of many hospitals. In light of the results of this first head-to-head comparison using contemporary interventional techniques, these protocols may need to be adjusted in favor of prasugrel for patients presenting with ACS. However, given the differences in timing of administration and in side-effect profiles, clinicians must also tailor the choice of agent to the individual patient.
Applications for Clinical Practice
In patients presenting with ACS, prasugrel was superior to ticagrelor, with a lower incidence of the composite of death, myocardial infarction, or stroke at 12 months and no significant difference in major bleeding. Prasugrel should be considered a first-line treatment for patients with ACS who will undergo invasive evaluation.
– Taishi Hirai, MD, and Arun Kumar, MD, University of Missouri, Columbia, MO
1. Wiviott SD, Braunwald E, McCabe CH, et al. Prasugrel versus clopidogrel in patients with acute coronary syndromes. N Engl J Med. 2007;357:2001-2015.
2. Wallentin L, Becker RC, Budaj A, et al. Ticagrelor versus clopidogrel in patients with acute coronary syndromes. N Engl J Med. 2009;361:1045-1057.
3. Ibanez B, James S, Agewall S, et al. 2017 ESC Guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation: The Task Force for the management of acute myocardial infarction in patients presenting with ST-segment elevation of the European Society of Cardiology (ESC). Eur Heart J. 2018;39:119-177.
4. Levine GN, Bates ER, Bittl JA, et al. 2016 ACC/AHA guideline focused update on duration of dual antiplatelet therapy in patients with coronary artery disease: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. J Am Coll Cardiol. 2016;68:1082-1115.