TULIP trials show clinical benefit of anifrolumab for SLE
ATLANTA –
In TULIP-1, which compared intravenous anifrolumab at doses of 300 mg or 150 mg versus placebo given every 4 weeks for 48 weeks, the primary endpoint of SLE Responder Index (SRI) response at week 52 in the 300-mg group versus the placebo group was not met. In post hoc analyses, however, numeric improvements at thresholds associated with clinical benefit were observed for several secondary outcomes, Richard A. Furie, MD, a professor of medicine at Hofstra University/Northwell, Hempstead, N.Y., reported during a plenary session at the annual meeting of the American College of Rheumatology.
The findings were published online Nov. 11 in The Lancet Rheumatology.
TULIP-2 compared IV anifrolumab at a dose of 300 mg versus placebo every 4 weeks for 48 weeks and demonstrated the superiority of anifrolumab for multiple efficacy endpoints, including the primary study endpoint of British Isles Lupus Assessment Group (BILAG)-based Composite Lupus Assessment (BICLA), Eric F. Morand, MD, PhD, reported during a late-breaking abstract session at the meeting.
The double-blind, phase 3 TULIP trials each enrolled seropositive SLE patients with moderate to severe active disease despite standard-of-care (SOC) therapy. All patients met ACR criteria, had an SLE Disease Activity Index 2000 (SLEDAI-2K) score of 6 or greater, and had BILAG index scores showing one or more organ systems with grade A involvement or two or more with grade B. Both trials required stable SOC therapy throughout the study, except for mandatory attempts at oral corticosteroid (OCS) tapering in patients who were receiving 10 mg/day or more of prednisone or its equivalent at study entry.
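For readers who want the entry criteria laid out explicitly, here is a minimal illustrative sketch that encodes the eligibility rules described above; the function name and parameters are hypothetical, and this is not the trials' actual screening logic.

```python
def meets_tulip_entry_criteria(meets_acr_criteria: bool,
                               seropositive: bool,
                               sledai_2k: int,
                               bilag_a_organs: int,
                               bilag_b_organs: int) -> bool:
    """Illustrative check of the TULIP entry criteria described in the text.

    Patients had to meet ACR classification criteria, be seropositive,
    have a SLEDAI-2K score of at least 6, and have BILAG scoring with at
    least one organ system at grade A or at least two at grade B.
    """
    active_disease = sledai_2k >= 6
    organ_involvement = bilag_a_organs >= 1 or bilag_b_organs >= 2
    return meets_acr_criteria and seropositive and active_disease and organ_involvement


# Example: SLEDAI-2K of 8 with two grade B organ systems qualifies.
print(meets_tulip_entry_criteria(True, True, sledai_2k=8,
                                 bilag_a_organs=0, bilag_b_organs=2))  # True
```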
The trials followed a phase 2 trial, reported by Dr. Furie at the 2015 ACR meeting and published in Arthritis & Rheumatology in 2017, which showed “very robust” efficacy of anifrolumab in this setting.
“The burning question for the last 20 years has been, ‘Can type 1 interferon inhibitors actually reduce lupus clinical activity?’ ” Dr. Furie said. “The problem here [is that] you can inhibit interferon-alpha, but there are four other subtypes capable of binding to the interferon receptor.”
Anifrolumab, which was first studied in scleroderma, inhibits the interferon (IFN) receptor, thereby providing broader inhibition than strategies that specifically target interferon-alpha, he explained.
In the phase 2 trial, the primary composite endpoint of SRI response at day 169 and sustained reduction of OCS dose between days 85 and 169 was met by 51.5% of patients receiving 300 mg of anifrolumab versus 26.6% of those receiving placebo.
TULIP-1
The TULIP-1 trial, however, failed to show a significant difference in the primary endpoint of week 52 SRI, although initial analyses showed some numeric benefit with respect to BICLA, OCS dose reductions, and other organ-specific endpoints.
The percentage of SRI responders at week 52 in the double-blind trial was 36.2% among 180 patients who received 300 mg of anifrolumab vs. 40.4% among 184 who received placebo (nominal P = .41); in the subgroup of patients with high IFN gene signature (IFNGS) test results, the rates were 35.9% and 39.3%, respectively (nominal P = .55).
Sustained OCS reduction to 7.5 mg/day or less occurred in 41% of anifrolumab and 32.1% of placebo group patients, and a 50% or greater reduction in Cutaneous Lupus Erythematosus Disease Activity Severity Index (CLASI) activity from baseline to week 12 occurred in 41.9% and 24.9%, respectively. The annualized flare rate to week 52 was 0.72 for anifrolumab and 0.60 for placebo.
BICLA response at week 52 was 37.1% with anifrolumab versus 27% with placebo, and a 50% or greater reduction in active joints from baseline to week 52 occurred in 47% versus 32.5% of patients in the groups, respectively.
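As a rough illustration of how these response rates translate into a between-group difference, the short sketch below computes the unadjusted BICLA difference and a normal-approximation 95% confidence interval from the week 52 rates and group sizes reported above. The actual trial analyses were stratified, so this is only an approximation, not the study's statistical method.

```python
import math

def diff_and_ci(p1: float, n1: int, p2: float, n2: int):
    """Unadjusted difference in response rates with a Wald 95% CI."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# BICLA at week 52 in TULIP-1: 37.1% of 180 on anifrolumab vs. 27% of 184 on placebo.
diff, (low, high) = diff_and_ci(0.371, 180, 0.27, 184)
print(f"difference = {diff:.1%}, approx. 95% CI {low:.1%} to {high:.1%}")
```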
The 150-mg dose, which was included to provide dose-response data, did not show efficacy in secondary outcomes.
“We see a delta of about 10 percentage points [for BICLA], and about a 15-percentage point change [in swollen and tender joint count] in favor of anifrolumab,” Dr. Furie said. “So why the big difference between phase 2 results and phase 3 results? Well, that led to a year-long interrogation of all the data ... [which revealed that] about 8% of patients were misclassified as nonresponders for [NSAID] use.”
The medication rules in the study automatically required any patient who used a restricted drug, including NSAIDs, to be classified as a nonresponder. That means a patient who took an NSAID for a headache at the beginning of the study, for example, would have been considered a nonresponder regardless of their outcome, he explained.
“This led to a review of all the restricted medication classification rules, and after unblinding, a meeting was convened with SLE experts and the sponsors to actually revise the medication rules just to make them clinically more appropriate. The key analyses were repeated post hoc,” he said.
The difference between the treatment and placebo groups in terms of the week 52 SRI didn’t change much in the post hoc analysis (46.9% vs. 43% of treatment and placebo patients, respectively, met the endpoint). Similarly, SRI rates in the IFNGS test–high subgroup were 48.2% and 41.8%, respectively.
However, more pronounced “shifts to the right,” indicating larger differences favoring anifrolumab over placebo, were seen for OCS dose reduction (48.8% vs. 32.1%), CLASI response (43.6% vs. 24.9%), and BICLA response (48.1% vs. 29.8%).
“For BICLA response, we see a fairly significant change ... with what appears to be a clinically significant delta (about 16 percentage points), and as far as the change in active joints, also very significant in my eyes,” he said.
Also of note, the time to BICLA response sustained to week 52 was improved with anifrolumab (hazard ratio, 1.93), and CLASI response differences emerged early, at about 12 weeks, he said.
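For context on what a hazard ratio of 1.93 for time to sustained BICLA response means in practice, here is a minimal sketch of a Cox proportional hazards fit using the lifelines package on invented toy data; it is not the trial's analysis, and the column names and values are made up purely for illustration.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy data: weeks until first BICLA response sustained to week 52 (or censoring).
df = pd.DataFrame({
    "weeks_to_response": [12, 20, 28, 36, 52, 16, 24, 40, 52, 52],
    "responded":         [1,  1,  1,  1,  0,  1,  1,  1,  0,  0],
    "anifrolumab":       [1,  1,  1,  1,  1,  0,  0,  0,  0,  0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_response", event_col="responded")
# exp(coef) for the treatment indicator is the hazard ratio; a value above 1
# indicates a shorter time to sustained response in the treated group.
print(cph.hazard_ratios_)
```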
The type 1 IFNGS was reduced by a median of 88% to 90% in the anifrolumab groups, compared with placebo, and modest changes in serologies were also noted.
Serious adverse events occurred in 13.9% and 10.8% of patients in the anifrolumab 300- and 150-mg arms, compared with 16.3% in the placebo arm. Herpes zoster was more common in the anifrolumab groups (5.6% for 300 mg and 5.4% for 150 mg vs. 1.6% for placebo).
“But other than that, no major standouts as far as the safety profile,” Dr. Furie said.
The findings, particularly after the medication rules were amended, suggest efficacy of anifrolumab for corticosteroid reductions, skin activity, BICLA, and joint scores, he said, noting that corticosteroid dose reductions are very important for patients, and that BICLA is “actually a very rigorous composite.”
Importantly, the findings also underscore the importance and impact of medication rules, and the critical role that endpoint selection plays in SLE trials.
“We’ve been seeing discordance lately between the SRI and BICLA ... so [there is] still a lot to learn,” he said. “And I think it’s important in evaluating the drug effect to look at the totality of the data.”
TULIP-2
BICLA response, the primary endpoint of TULIP-2, was achieved by 47.8% of 180 patients who received anifrolumab, compared with 31.5% of 182 who received placebo, said Dr. Morand, professor and head of the School of Clinical Sciences at Monash University, Melbourne.
“The effect size was 16.3 percentage points with an adjusted p value of 0.001. Therefore, the primary outcome of this trial was attained,” said Dr. Morand, who also is head of the Monash Health Rheumatology Unit. “Separation between the treatment arms occurred early and was maintained across the progression of the trial.”
Anifrolumab was also superior to placebo for key secondary endpoints, including OCS dose reduction to 7.5 mg/day or less (51.5% vs. 30.2%) and CLASI response (49.0% vs. 25.0%).
“Joint responses did not show a significant difference between the anifrolumab and placebo arms,” he said, adding that the annualized flare rate also did not differ significantly between the groups, but was numerically lower in anifrolumab-treated patients (0.43 vs. 0.64; rate ratio, 0.67; P = .081).
Numeric differences also favored anifrolumab for multiple secondary endpoints, including SRI responses, time to onset of BICLA-sustained response, and time to first flare, he noted.
Further, in patients with high baseline IFNGS, anifrolumab induced neutralization of IFNGS by week 12, with a median suppression of 88.0%, which persisted for the duration of the study; no such effect was seen in the placebo arm.
Serum anti–double stranded DNA also trended toward normalization with anifrolumab.
The safety profile of anifrolumab was similar to that seen in previous trials, including TULIP-1, with herpes zoster occurring more often in those receiving anifrolumab (7.2% vs. 1.1% in the placebo group), Dr. Morand said, noting that “all herpes zoster episodes were cutaneous, all responded to antiviral therapy, and none required [treatment] discontinuation.”
Serious adverse events, including pneumonia and SLE worsening, occurred less frequently in the anifrolumab arm than in the placebo arm (8.3% vs. 17.0%), as did adverse events leading to treatment discontinuation (2.8% vs. 7.1%). One death occurred in the anifrolumab group from community-acquired pneumonia, and few patients (0.6%) developed antidrug antibodies.
No new safety signals were identified, he said, noting that “the findings add to cumulative evidence identifying anifrolumab as a potential new treatment option for SLE.”
“In conclusion, TULIP-2 was a positive phase 3 trial in lupus, and there aren’t many times that that sentence has been spoken,” he said.
The TULIP-1 and TULIP-2 trials were sponsored by AstraZeneca. Dr. Furie and Dr. Morand both reported grant/research support and consulting fees from AstraZeneca, as well as speaker’s bureau participation for AstraZeneca.
SOURCES: Furie RA et al. Arthritis Rheumatol. 2019;71(suppl 10), Abstract 1763; Morand EF et al. Arthritis Rheumatol. 2019;71(suppl 10), Abstract L17.
REPORTING FROM ACR 2019
Health care: More uninsured as insurance costs grow faster
WASHINGTON – The number of uninsured grew in 2018 as the rate of health care spending grew, according to data from the Centers for Medicare & Medicaid Services.
A total of 30.7 million people in the United States were uninsured in 2018 – up 1 million from 2017. It was the second year in a row that the number of uninsured grew by that amount.
The newly uninsured came from the private insurance sector, which saw the number of insured decrease to 200.5 million in 2018 from 202.1 million in the previous year, partially offset by increases in Americans covered by Medicare and Medicaid.
The increase in uninsured people comes as the growth rate in health care spending rose to 4.6% in 2018 from 4.2% in 2017, though much of that acceleration was attributed to a health insurance tax that applied in 2018 after Congress had placed a moratorium on it for the previous year. The tax, part of the Affordable Care Act, first took effect in 2014.
“We see that health care spending reached $3.6 trillion, or $11,172 per person, and spending was faster,” Micah Hartman, statistician in the National Health Statistics Group in the CMS Office of the Actuary, said during a press conference to review the national health expenditure results. “The main reason for the acceleration was faster growth in the net cost of insurance, and that was particularly the case for private health insurance and also for Medicare.”
The net cost of insurance includes nonmedical expenses such as administration, taxes, and fees, as well as gains or losses for private health insurers. The ACA’s health insurance tax generated $14.3 billion in spending, according to Internal Revenue Service data.
Also contributing to the rise in the rate of growth was faster growth in medical prices, “and that was due to underlying economy-wide inflation, as well as the impacts of the tax,” Mr. Hartman said.
Despite this growth in the rate of spending, health care spending as a percentage of GDP fell slightly to 17.7% in 2018 from 17.9% in 2017, as GDP grew faster than health care spending in 2018.
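A quick back-of-the-envelope check ties the reported figures together; the sketch below derives the implied U.S. population and GDP from the numbers quoted above, purely as an arithmetic illustration rather than official estimates.

```python
total_spending = 3.6e12   # $3.6 trillion in health care spending, 2018
per_person = 11_172       # reported spending per person, in dollars
share_of_gdp = 0.177      # health care spending as 17.7% of GDP, 2018

implied_population = total_spending / per_person
implied_gdp = total_spending / share_of_gdp

print(f"implied population: {implied_population / 1e6:.0f} million")
print(f"implied GDP: ${implied_gdp / 1e12:.1f} trillion")
```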
The faster growth in prices more than offset the slightly slower growth in the use and intensity of medical services, CMS reported.
The growth rate of spending on physician and clinical services slowed to 4.1% in 2018 from 4.7% in 2017. Overall spending on physician and clinical services in 2018 reached $725.6 billion and accounted for 20% of overall health care spending.
Spending on hospital services also slowed, but only slightly, dropping to a growth rate of 4.5% from 4.7% during this period. Hospital spending in 2018, at $1.2 trillion, accounted for 33% of overall health care spending.
Growth in personal health care spending held steady at 4.1% in 2018, the same rate as in 2017, though the individual components that feed into that figure varied. For example, the growth rate of spending on retail pharmaceuticals rose to 2.5% from 1.4% during this period. Spending on retail pharmaceuticals reached $335 billion and accounted for 9% of overall health care spending.
Another factor in the rising growth rate in spending came from employer-sponsored insurance.
“Growth in health spending by private business was due to faster growth in employer contributions to private health insurance premiums,” Anne B. Martin, economist in the National Health Statistics Group, said during the press conference. There also was faster growth in spending by the federal government, “driven mainly by faster growth in the federally funded portions of Medicare and Medicaid.”
Spending by private health insurance grew at a rate of 5.8% and reached $1.2 trillion in 2018. Medicare spending grew by 6.4% and reached $750.2 billion, while Medicaid spending grew 3.0%, reaching $597.4 billion.
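Because the article reports 2018 spending levels together with year-over-year growth rates, approximate 2017 levels can be back-calculated. The sketch below does so for the three payer categories listed above; since the source figures are rounded, the results are rough illustrations only.

```python
payers_2018 = {
    # payer: (2018 spending in billions of dollars, 2018 growth rate)
    "private health insurance": (1200.0, 0.058),
    "Medicare":                 (750.2, 0.064),
    "Medicaid":                 (597.4, 0.030),
}

for payer, (level_2018, growth) in payers_2018.items():
    level_2017 = level_2018 / (1 + growth)  # undo one year of growth
    print(f"{payer}: ~${level_2017:,.0f}B in 2017 -> ${level_2018:,.0f}B in 2018")
```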
SOURCE: Hartman M et al. Health Affairs. 2019. doi: 10.1377/hlthaff.2019.00451
Oral contraceptive use associated with smaller hypothalamic and pituitary volumes
CHICAGO – Women taking oral contraceptives had, on average, a hypothalamus that was 6% smaller than that of women who weren’t taking them, in a small study that used magnetic resonance imaging. Pituitary volume was also smaller.
Though the sample size was relatively small, 50 women in total, it’s the only study to date that looks at the relationship between hypothalamic volume and oral contraceptive (OC) use, and the largest examining pituitary volume, according to Ke Xun (Kevin) Chen, MD, who presented the findings at the annual meeting of the Radiological Society of North America.
Using MRI, Dr. Chen and his colleagues found that hypothalamic volume was significantly smaller in women taking oral contraceptives than those who were naturally cycling (b value = –64.1; P = .006). The pituitary gland also was significantly smaller in those taking OCs (b = –92.8; P = .007).
“I was quite surprised [at the finding], because the magnitude of the effect is not small,” especially in the context of changes in volume of other brain structures, senior author Michael L. Lipton, MD, PhD, said in an interview. In Alzheimer’s disease, for example, a volume loss of 4% annually can be expected.
However, “it’s not shocking to me in a negative way at all. I can’t tell you what it means in terms of how it’s going to affect people,” since this is a cross-sectional study that only detected a correlation and can’t say anything about a causative relationship, he added. “We don’t even know that [OCs] cause this effect. ... It’s plausible that this is just a plasticity-related change that’s simply showing us the effect of the drug.
“We’re going to be much more careful to consider oral contraceptive use as a covariate in future research studies; that’s for sure,” he said.
Although OCs have been available since their 1960 Food and Drug Administration approval, and their effects in some areas of physiology and health have been well studied, there’s still not much known about how oral contraceptives affect brain function, said Dr. Lipton, professor of neuroradiology and psychiatry and behavioral sciences at Albert Einstein College of Medicine, in the Montefiore medical system, New York.
The spark for this study came from one of Dr. Lipton’s main areas of research – sex differences in susceptibility to and recovery from traumatic brain injury. “Women are more likely to exhibit changes in their brain [after injury] – and changes in their brain function – than men,” he said.
In the present study, “we went at this trying to understand the [extent] to which the hormone effect might be doing something in regular, healthy people that we need to consider as part of the bigger picture,” he said.
Dr. Lipton, Dr. Chen (then a radiology resident at Albert Einstein College of Medicine), and their coauthors constructed the study to look for differences in brain structure between women who were experiencing natural menstrual cycles and those who were taking exogenous hormones, to begin to learn how oral contraceptive use might modify risk and susceptibility for neurologic disease and injury.
It had already been established that global brain volume didn’t differ between naturally cycling women and those using OCs. However, some studies had shown differences in volume of some specific brain regions, and one study had shown smaller pituitary volume in OC users, according to the presentation by Dr. Chen, who is now a radiology fellow at Brigham and Women’s Hospital, Boston. Accurately measuring hypothalamic volume represents a technical challenge, and the effect of OCs on the structure’s volume hadn’t previously been studied.
Sex hormones, said Dr. Lipton, have known trophic effects on brain tissue, and ovarian sex hormones cross the blood-brain barrier, so the idea that there would be some plasticity in the brains of those taking OCs wasn’t completely surprising, especially since hormone receptors lie within the central nervous system. However, he said he was “very surprised” by the effect size seen in the study.
The study included 21 healthy women taking combined oral contraceptives, and 29 naturally cycling women. Participants’ mean age was 23 years for the OC users, and 21 for the naturally cycling women. Body mass index and smoking history didn’t differ between groups. Women on OCs were significantly more likely to use alcohol and to drink more frequently than those not taking OCs (P = .001). Participants were included only if they were taking a combined estrogen-progestin pill; those on noncyclical contraceptives such as implants and hormone-emitting intrauterine devices were excluded, as were naturally cycling women with very long or irregular menstrual cycles.
After multivariable statistical analysis, the only two significant predictors of hypothalamic volume were total intracranial volume and OC use. For pituitary volume, body mass index and OC use remained significant.
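To make the reported “b values” concrete: they are coefficients from a multivariable regression, so the b of –64.1 for OC use can be read as the adjusted difference in hypothalamic volume associated with OC use after accounting for the other predictors. Below is a minimal sketch of that kind of model using statsmodels on synthetic, invented data; the variable names and values are hypothetical, and this is not the investigators’ actual code or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50  # same order of size as the study

# Synthetic, invented data just to show the shape of the analysis.
df = pd.DataFrame({
    "oc_use": rng.integers(0, 2, n),                # 1 = taking oral contraceptives
    "ticv_mm3": rng.normal(1_450_000, 100_000, n),  # total intracranial volume
    "bmi": rng.normal(23, 3, n),
})
df["hypothalamus_mm3"] = (
    500 + 0.0002 * df["ticv_mm3"] - 40 * df["oc_use"] + rng.normal(0, 30, n)
)

# OLS with OC use, total intracranial volume, and BMI as predictors;
# the coefficient on oc_use plays the role of the reported b value.
model = smf.ols("hypothalamus_mm3 ~ oc_use + ticv_mm3 + bmi", data=df).fit()
print(model.params["oc_use"], model.pvalues["oc_use"])
```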
In addition to the MRI scans, participants also completed neurobehavioral testing to assess mood and cognition. An exploratory analysis showed no correlation between hypothalamic volume and the cognitive testing battery results, which included assessments for verbal learning and memory, executive function, and working memory.
However, a moderate positive association was seen between hypothalamic volume and anger scores (r = 0.34; P = .02). The investigators also found a “strong positive correlation of hypothalamic volume with depression,” said Dr. Chen, although that correlation (r = 0.25) did not reach statistical significance (P = .09).
The investigators found no menstrual cycle-related changes in hypothalamic and pituitary volume among naturally cycling women.
Hypothalamic volume was obtained using manual segmentation of the MRIs; a combined automated-manual approach was used to obtain pituitary volume. Reliability was tested by having five raters each assess volumes for a randomly selected subset of the scans; inter-rater reliability fell between 0.78 and 0.86, values considered to indicate “good” reliability.
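Inter-rater reliability in this range is typically expressed as an intraclass correlation coefficient (ICC). As an illustration only, not the study’s actual procedure, the sketch below computes ICCs with the pingouin package from a small invented long-format table of volumes rated by multiple raters.

```python
import pandas as pd
import pingouin as pg

# Invented example: 4 scans, each measured by 3 raters (volumes in mm^3).
df = pd.DataFrame({
    "scan":   ["s1", "s1", "s1", "s2", "s2", "s2",
               "s3", "s3", "s3", "s4", "s4", "s4"],
    "rater":  ["r1", "r2", "r3"] * 4,
    "volume": [830, 845, 838, 910, 905, 918, 760, 770, 765, 880, 872, 890],
})

# Intraclass correlation coefficients; the ICC2/ICC3 rows are the usual
# choices for agreement or consistency among a fixed set of raters.
icc = pg.intraclass_corr(data=df, targets="scan", raters="rater", ratings="volume")
print(icc[["Type", "ICC", "CI95%"]])
```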
In addition to the small sample size, Dr. Chen acknowledged several limitations of the study, including a lack of data on details of OC use, such as duration, exact type of OC, and whether women were in the placebo phase of their pill packs at the time of scanning. Additionally, women who were naturally cycling were not asked about prior OC use.
Also, women’s menstrual phase was estimated from the self-reported date of the last menstrual period, rather than obtained by direct measurement via serum hormone levels.
Dr. Lipton’s perspective adds a strong note of caution to avoid overinterpretation from the study. Dr. Chen and Dr. Lipton agreed, however, that OC use should be accounted for when brain structure and function are studied in female participants.
Dr. Chen, Dr. Lipton, and their coauthors reported that they had no conflicts of interest. The authors reported no outside sources of funding.
SOURCE: Chen K et al. RSNA 2019. Presentation SSM-1904.
CHICAGO – Women taking oral contraceptives had, on average, a hypothalamus that was 6% smaller than those who didn’t, in a small study that used magnetic resonance imaging. Pituitary volume was also smaller.
Though the sample size was relatively small, 50 women in total, it’s the only study to date that looks at the relationship between hypothalamic volume and oral contraceptive (OC) use, and the largest examining pituitary volume, according to Ke Xun (Kevin) Chen, MD, who presented the findings at the annual meeting of the Radiological Society of North America.
Using MRI, Dr. Chen and his colleagues found that hypothalamic volume was significantly smaller in women taking oral contraceptives than those who were naturally cycling (b value = –64.1; P = .006). The pituitary gland also was significantly smaller in those taking OCs (b = –92.8; P = .007).
“I was quite surprised [at the finding], because the magnitude of the effect is not small,” especially in the context of changes in volume of other brain structures, senior author Michael L. Lipton, MD, PhD, said in an interview. In Alzheimer’s disease, for example, a volume loss of 4% annually can be expected.
However, “it’s not shocking to me in a negative way at all. I can’t tell you what it means in terms of how it’s going to affect people,” since this is a cross-sectional study that only detected a correlation and can’t say anything about a causative relationship, he added. “We don’t even know that [OCs] cause this effect. ... It’s plausible that this is just a plasticity-related change that’s simply showing us the effect of the drug.
“We’re going to be much more careful to consider oral contraceptive use as a covariate in future research studies; that’s for sure,” he said.
Although OCs have been available since their 1960 Food and Drug Administration approval, and their effects in some areas of physiology and health have been well studied, there’s still not much known about how oral contraceptives affect brain function, said Dr. Lipton, professor of neuroradiology and psychiatry and behavioral sciences at Albert Einstein College of Medicine, in the Montefiore medical system, New York.
The spark for this study came from one of Dr. Lipton’s main areas of research – sex differences in susceptibility to and recovery from traumatic brain injury. “Women are more likely to exhibit changes in their brain [after injury] – and changes in their brain function – than men,” he said.
In the present study, “we went at this trying to understand the effect to which the hormone effect might be doing something in regular, healthy people that we need to consider as part of the bigger picture,” he said.
Dr. Lipton, Dr. Chen (then a radiology resident at Albert Einstein College of Medicine), and their coauthors constructed the study to look for differences in brain structure between women who were experiencing natural menstrual cycles and those who were taking exogenous hormones, to begin to learn how oral contraceptive use might modify risk and susceptibility for neurologic disease and injury.
It had already been established that global brain volume didn’t differ between naturally cycling women and those using OCs. However, some studies had shown differences in volume of some specific brain regions, and one study had shown smaller pituitary volume in OC users, according to the presentation by Dr. Chen, who is now a radiology fellow at Brigham and Women’s Hospital, Boston. Accurately measuring hypothalamic volume represents a technical challenge, and the effect of OCs on the structure’s volume hadn’t previously been studied.
Sex hormones, said Dr. Lipton, have known trophic effects on brain tissue and ovarian sex hormones cross the blood brain barrier, so the idea that there would be some plasticity in the brains of those taking OCs wasn’t completely surprising, especially since there are hormone receptors that lie within the central nervous system. However, he said he was “very surprised” by the effect size seen in the study.
The study included 21 healthy women taking combined oral contraceptives, and 29 naturally cycling women. Participants’ mean age was 23 years for the OC users, and 21 for the naturally cycling women. Body mass index and smoking history didn’t differ between groups. Women on OCs were significantly more likely to use alcohol and to drink more frequently than those not taking OCs (P = .001). Participants were included only if they were taking a combined estrogen-progestin pill; those on noncyclical contraceptives such as implants and hormone-emitting intrauterine devices were excluded, as were naturally cycling women with very long or irregular menstrual cycles.
After multivariable statistical analysis, the only two significant predictors of hypothalamic volume were total intracranial volume and OC use. For pituitary volume, body mass index and OC use remained significant.
In addition to the MRI scans, participants also completed neurobehavioral testing to assess mood and cognition. An exploratory analysis showed no correlation between hypothalamic volume and the cognitive testing battery results, which included assessments for verbal learning and memory, executive function, and working memory.
However, a moderate positive association was seen between hypothalamic volume and anger scores (r = 0.34; P = .02). The investigators found a “strong positive correlation of hypothalamic volume with depression,” said Dr. Chen (r = 0.25; P = .09).
The investigators found no menstrual cycle-related changes in hypothalamic and pituitary volume among naturally cycling women.
Hypothalamic volume was obtained using manual segmentation of the MRIs; a combined automated-manual approach was used to obtain pituitary volume. Reliability was tested by having 5 raters each assess volumes for a randomly selected subset of the scans; inter-rater reliability fell between 0.78 and 0.86, values considered to indicate “good” reliability.
In addition to the small sample size, Dr. Chen acknowledged several limitations to the study. These included the lack of accounting for details of OC use including duration, exact type of OC, and whether women were taking the placebo phase of their pill packs at the time of scanning. Additionally, women who were naturally cycling were not asked about prior history of OC use.
Also, women’s menstrual phase was estimated from the self-reported date of the last menstrual period, rather than obtained by direct measurement via serum hormone levels.
Dr. Lipton’s perspective adds a strong note of caution to avoid overinterpretation from the study. Dr. Chen and Dr. Lipton agreed, however, that OC use should be accounted for when brain structure and function are studied in female participants.
Dr. Chen, Dr. Lipton, and their coauthors reported that they had no conflicts of interest. The authors reported no outside sources of funding.
SOURCE: Chen K et al. RSNA 2019. Presentation SSM-1904.
CHICAGO – Women taking oral contraceptives had, on average, a hypothalamus that was 6% smaller than those who didn’t, in a small study that used magnetic resonance imaging. Pituitary volume was also smaller.
Though the sample size was relatively small, 50 women in total, it’s the only study to date that looks at the relationship between hypothalamic volume and oral contraceptive (OC) use, and the largest examining pituitary volume, according to Ke Xun (Kevin) Chen, MD, who presented the findings at the annual meeting of the Radiological Society of North America.
Using MRI, Dr. Chen and his colleagues found that hypothalamic volume was significantly smaller in women taking oral contraceptives than those who were naturally cycling (b value = –64.1; P = .006). The pituitary gland also was significantly smaller in those taking OCs (b = –92.8; P = .007).
“I was quite surprised [at the finding], because the magnitude of the effect is not small,” especially in the context of changes in volume of other brain structures, senior author Michael L. Lipton, MD, PhD, said in an interview. In Alzheimer’s disease, for example, a volume loss of 4% annually can be expected.
However, “it’s not shocking to me in a negative way at all. I can’t tell you what it means in terms of how it’s going to affect people,” since this is a cross-sectional study that only detected a correlation and can’t say anything about a causative relationship, he added. “We don’t even know that [OCs] cause this effect. ... It’s plausible that this is just a plasticity-related change that’s simply showing us the effect of the drug.
“We’re going to be much more careful to consider oral contraceptive use as a covariate in future research studies; that’s for sure,” he said.
Although OCs have been available since their 1960 Food and Drug Administration approval, and their effects in some areas of physiology and health have been well studied, there’s still not much known about how oral contraceptives affect brain function, said Dr. Lipton, professor of neuroradiology and psychiatry and behavioral sciences at Albert Einstein College of Medicine, in the Montefiore medical system, New York.
The spark for this study came from one of Dr. Lipton’s main areas of research – sex differences in susceptibility to and recovery from traumatic brain injury. “Women are more likely to exhibit changes in their brain [after injury] – and changes in their brain function – than men,” he said.
In the present study, “we went at this trying to understand the extent to which the hormone effect might be doing something in regular, healthy people that we need to consider as part of the bigger picture,” he said.
Dr. Lipton, Dr. Chen (then a radiology resident at Albert Einstein College of Medicine), and their coauthors constructed the study to look for differences in brain structure between women who were experiencing natural menstrual cycles and those who were taking exogenous hormones, to begin to learn how oral contraceptive use might modify risk and susceptibility for neurologic disease and injury.
It had already been established that global brain volume didn’t differ between naturally cycling women and those using OCs. However, some studies had shown differences in volume of some specific brain regions, and one study had shown smaller pituitary volume in OC users, according to the presentation by Dr. Chen, who is now a radiology fellow at Brigham and Women’s Hospital, Boston. Accurately measuring hypothalamic volume represents a technical challenge, and the effect of OCs on the structure’s volume hadn’t previously been studied.
Sex hormones, said Dr. Lipton, have known trophic effects on brain tissue, and ovarian sex hormones cross the blood-brain barrier, so the idea that there would be some plasticity in the brains of those taking OCs wasn’t completely surprising, especially since hormone receptors are present within the central nervous system. However, he said he was “very surprised” by the effect size seen in the study.
The study included 21 healthy women taking combined oral contraceptives, and 29 naturally cycling women. Participants’ mean age was 23 years for the OC users, and 21 for the naturally cycling women. Body mass index and smoking history didn’t differ between groups. Women on OCs were significantly more likely to use alcohol and to drink more frequently than those not taking OCs (P = .001). Participants were included only if they were taking a combined estrogen-progestin pill; those on noncyclical contraceptives such as implants and hormone-emitting intrauterine devices were excluded, as were naturally cycling women with very long or irregular menstrual cycles.
After multivariable statistical analysis, the only two significant predictors of hypothalamic volume were total intracranial volume and OC use. For pituitary volume, body mass index and OC use remained significant.
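To make that analysis concrete, here is a minimal sketch of how such a multivariable model might be set up. The data file, column names, and exact covariate set are illustrative assumptions, not the investigators' actual code.

```python
# Minimal sketch of a multivariable regression like the one described above.
# The data file and column names are hypothetical; the study's actual model
# and covariate set may differ.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("volumes.csv")  # one row per participant (hypothetical)

# Hypothalamic volume modeled on OC use plus candidate covariates; the
# coefficient on oc_use is the adjusted group difference (the reported b value).
model = smf.ols(
    "hypothalamic_volume ~ oc_use + total_intracranial_volume + bmi",
    data=df,
).fit()
print(model.summary())
```

An analogous model with pituitary volume as the outcome would parallel the pituitary findings reported above.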
In addition to the MRI scans, participants also completed neurobehavioral testing to assess mood and cognition. An exploratory analysis showed no correlation between hypothalamic volume and the cognitive testing battery results, which included assessments for verbal learning and memory, executive function, and working memory.
However, a moderate positive association was seen between hypothalamic volume and anger scores (r = 0.34; P = .02). Dr. Chen also reported a positive correlation between hypothalamic volume and depression scores, although it did not reach statistical significance (r = 0.25; P = .09).
The investigators found no menstrual cycle-related changes in hypothalamic and pituitary volume among naturally cycling women.
Hypothalamic volume was obtained using manual segmentation of the MRIs; a combined automated-manual approach was used to obtain pituitary volume. Reliability was tested by having 5 raters each assess volumes for a randomly selected subset of the scans; inter-rater reliability fell between 0.78 and 0.86, values considered to indicate “good” reliability.
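As a rough illustration of how inter-rater reliability in that range might be computed, the sketch below applies an intraclass correlation to a long-format table of rater-by-scan measurements using the pingouin package. The column names and the choice of ICC form are assumptions, since the presentation does not specify them.

```python
# Illustrative inter-rater reliability calculation (not the investigators' code).
# Assumes a long-format table with one row per (scan, rater) volume measurement.
import pandas as pd
import pingouin as pg

ratings = pd.read_csv("rater_volumes.csv")  # hypothetical columns: scan_id, rater, volume_mm3

icc = pg.intraclass_corr(
    data=ratings, targets="scan_id", raters="rater", ratings="volume_mm3"
)
# The presentation does not state which ICC form was used; ICC2 (two-way random
# effects, absolute agreement) is shown here purely as an example.
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])
```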
In addition to the small sample size, Dr. Chen acknowledged several limitations to the study. These included the lack of accounting for details of OC use including duration, exact type of OC, and whether women were taking the placebo phase of their pill packs at the time of scanning. Additionally, women who were naturally cycling were not asked about prior history of OC use.
Also, women’s menstrual phase was estimated from the self-reported date of the last menstrual period, rather than obtained by direct measurement via serum hormone levels.
Dr. Lipton’s perspective adds a strong note of caution against overinterpreting the study’s findings. Dr. Chen and Dr. Lipton agreed, however, that OC use should be accounted for when brain structure and function are studied in female participants.
Dr. Chen, Dr. Lipton, and their coauthors reported that they had no conflicts of interest. The authors reported no outside sources of funding.
SOURCE: Chen K et al. RSNA 2019. Presentation SSM-1904.
REPORTING FROM RSNA 2019
2019 Update on bone health
Prior to last year, this column was titled “Update on osteoporosis.” My observation, however, is that too many ObGyn providers simply measure bone mass (known as bone mineral density, or BMD), label a patient as normal, osteopenic, or osteoporotic, and then consider pharmacotherapy. The FRAX fracture prediction algorithm, which incorporates age, weight, height, history of any previous fracture, family history of hip fracture, current smoking, use of glucocorticoid medications, and any history of rheumatoid arthritis, has refined the screening process somewhat, if and when it is utilized. As clinicians, we should never lose sight of our goal: to prevent fragility fractures. Having osteoporosis increases that risk, but not having osteoporosis does not eliminate it.
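As an aside for readers who like to see the inputs laid out explicitly, the sketch below collects the FRAX risk factors listed above into a single record before they are entered into the official calculator. FRAX's risk computation itself is proprietary and is not reproduced here, and the field names are my own.

```python
# Hypothetical container for the FRAX clinical risk factors named above.
# FRAX's 10-year fracture probabilities are computed by the official calculator;
# this sketch only organizes the inputs a clinician would collect.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FraxInputs:
    age_years: int
    weight_kg: float
    height_cm: float
    previous_fracture: bool
    parental_hip_fracture: bool
    current_smoking: bool
    glucocorticoid_use: bool
    rheumatoid_arthritis: bool
    femoral_neck_bmd_tscore: Optional[float] = None  # optional; FRAX runs with or without BMD

# Example record for an invented patient.
patient = FraxInputs(
    age_years=67, weight_kg=61.0, height_cm=163.0,
    previous_fracture=True, parental_hip_fracture=False,
    current_smoking=False, glucocorticoid_use=False,
    rheumatoid_arthritis=False,
)
```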
In this Update, I highlight various ways in which work published this past year may help us to improve our patients’ bone health and reduce fragility fractures.
Updated ISCD guidance emphasizes appropriate BMD testing, use of the Z-score, and terminology
International Society for Clinical Densitometry. 2019 ISCD Official Positions-Adult. June 2019. https://www.iscd.org/official-positions/2019-ISCD-official-positions-adult.
Indications for BMD testing
The ISCD continues to indicate BMD testing for all women age 65 and older. For postmenopausal women younger than 65, a BMD test is indicated if they have a risk factor for low bone mass, such as 1) low body weight, 2) prior fracture, 3) high-risk medication use, or 4) a disease or condition associated with bone loss. A BMD test also is indicated for women in the menopausal transition who have clinical risk factors for fracture, such as low body weight, prior fracture, or high-risk medication use. Interestingly, the ISCD recommendation for men is similar but uses age 70 as the threshold.
In addition, the ISCD recommends BMD testing in adults who have a fragility fracture, have a disease or condition associated with low bone mass, or take medications associated with low bone mass. Testing is also recommended for anyone being considered for pharmacologic therapy, anyone being treated (to monitor treatment effect), and anyone not receiving therapy in whom evidence of bone loss would lead to treatment. Women discontinuing estrogen should be considered for BMD testing according to the indications already mentioned.
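Those indications reduce to a handful of yes/no checks, as in the hypothetical helper below; it illustrates the logic just summarized, is not a validated clinical tool, and the argument names are my own.

```python
# Illustrative encoding of the ISCD indications summarized above.
# Not a validated clinical tool; argument names are my own.
def bmd_testing_indicated(
    age: int,
    postmenopausal: bool,
    menopausal_transition: bool,
    low_body_weight: bool,
    prior_fracture: bool,
    high_risk_medication: bool,
    bone_loss_condition: bool,
) -> bool:
    if age >= 65:
        return True  # all women age 65 and older
    if postmenopausal and any(
        [low_body_weight, prior_fracture, high_risk_medication, bone_loss_condition]
    ):
        return True  # younger postmenopausal women with a risk factor for low bone mass
    if menopausal_transition and any(
        [low_body_weight, prior_fracture, high_risk_medication]
    ):
        return True  # menopausal transition with clinical risk factors for fracture
    return False

# Example: a 58-year-old postmenopausal woman with a prior fracture.
print(bmd_testing_indicated(58, True, False, False, True, False, False))  # True
```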
Sites to assess for osteoporosis. The World Health Organization international reference standard for osteoporosis diagnosis is a T-score of -2.5 or less at the femoral neck. The reference standard from which the T-score is calculated is white women aged 20 to 29 years in the Third National Health and Nutrition Examination Survey database. Osteoporosis also may be diagnosed in postmenopausal women if the T-score of the lumbar spine, total hip, or femoral neck is -2.5 or less. In certain circumstances, the 33% radius (also called the one-third radius) may be utilized. Other hip regions of interest, including Ward's area and the greater trochanter, should not be used for diagnosis.
The skeletal sites at which to measure BMD are the anteroposterior spine and the hip in all patients. Use L1-L4 for spine BMD measurement, but exclude vertebrae that are affected by local structural changes or artifact. Use 3 vertebrae if 4 cannot be used, and 2 if 3 cannot be used. BMD-based diagnostic classification should not be made using a single vertebra. Anatomically abnormal vertebrae may be excluded from analysis if they are clearly abnormal and nonassessable within the resolution of the system, or if there is more than a 1.0 T-score difference between the vertebra in question and adjacent vertebrae. When vertebrae are excluded, the BMD of the remaining vertebrae is used to derive the T-score.
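The vertebral exclusion rule lends itself to a short worked example. The sketch below uses invented T-scores and flags a vertebra as discordant when it sits more than 1.0 T-score away from the median of the remaining vertebrae, a simple proxy for the adjacent-vertebra criterion described above, and it refuses to classify from the spine when fewer than 2 vertebrae remain.

```python
# Worked example of the L1-L4 vertebral exclusion logic summarized above.
# T-scores are invented; the median-based check is a simplification of the
# "more than 1.0 T-score difference from adjacent vertebrae" criterion.
import statistics

tscores = {"L1": -1.8, "L2": -2.0, "L3": -0.4, "L4": -1.9}  # L3 looks artifactually high
abnormal_on_imaging = set()  # vertebrae judged clearly abnormal or nonassessable

excluded = set(abnormal_on_imaging)
for level, t in tscores.items():
    others = [v for lvl, v in tscores.items() if lvl != level and lvl not in excluded]
    if abs(t - statistics.median(others)) > 1.0:
        excluded.add(level)

remaining = sorted(set(tscores) - excluded)
if len(remaining) >= 2:
    print("Report spine BMD from:", remaining)  # here: ['L1', 'L2', 'L4']
else:
    print("Fewer than 2 assessable vertebrae: do not classify from the spine.")
```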
For BMD measurement at the hip, the femoral neck or total proximal femur—whichever is lowest—should be used. Either hip may be measured. Data are insufficient on whether mean T-scores for bilateral hip BMD should be used for diagnosis.
Terminology. While the ISCD retains the term osteopenia, the term low bone mass or low bone density is preferred. People with low bone mass or density are not necessarily at high fracture risk.
Concerning BMD reporting in women prior to menopause, Z-scores, not T-scores, are preferred. A Z-score of -2.0 or lower is defined as "below the expected range for age"; a Z-score above -2.0 is "within the expected range for age."
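Because the T-score/Z-score distinction drives this terminology, a brief worked example of the arithmetic may help. Both scores express how far a measured BMD lies from a reference mean in standard deviation units; the T-score uses a young-adult reference, while the Z-score uses an age- and sex-matched reference. The reference values below are invented for illustration only.

```python
# Standard-score arithmetic behind T- and Z-scores; reference values are invented.
def standard_score(bmd: float, reference_mean: float, reference_sd: float) -> float:
    return (bmd - reference_mean) / reference_sd

bmd = 0.762  # patient's femoral neck BMD in g/cm2 (hypothetical)

# T-score: measured BMD compared with the young-adult reference population.
t = standard_score(bmd, reference_mean=0.858, reference_sd=0.120)
# Z-score: measured BMD compared with an age- and sex-matched reference population.
z = standard_score(bmd, reference_mean=0.798, reference_sd=0.120)

print(f"T-score = {t:.1f}")  # about -0.8 with these reference values
print(f"Z-score = {z:.1f}")  # about -0.3, i.e., "within the expected range for age"
```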
Use of serial BMD testing
Serial BMD testing, in combination with clinical assessment of fracture risk, can be used to determine whether treatment should be initiated in untreated patients. It also can monitor a patient's response to therapy by documenting an increase in, or stability of, bone density, and it should be used to monitor individuals after cessation of osteoporosis drug therapy. Finally, serial BMD testing can detect loss of bone density, indicating the need to assess treatment adherence, evaluate possible secondary causes of osteoporosis, and possibly re-evaluate therapeutic options.
Intervals between BMD testing should be determined according to each patient's clinical status. Typically, 1 year after initiating or changing therapy is appropriate, with longer intervals once therapeutic effect is established.
Patients commonly ask for BMD testing, and ObGyn providers commonly order it. Understanding the appropriate use of BMD testing (whom to scan, which sites to evaluate, when vertebral results may be spurious because of artifact, why Z-scores rather than T-scores should be used in premenopausal women, why low bone mass is preferred over the term osteopenia, and how to order and use serial testing) will strengthen our role as frontline providers of bone health care for our patients.
Dyspareunia drug has positive effects on bone
de Villiers TJ, Altomare C, Particco M, et al. Effects of ospemifene on bone in postmenopausal women. Climacteric. 2019;22:442-447.
In preclinical studies, ospemifene effectively reduced bone loss in ovariectomized rats, with activity comparable to that of estradiol and raloxifene.3 Clinical data from 3 phase 1 or phase 2 trials found that ospemifene 60 mg/day had a positive effect on biochemical markers of bone turnover in healthy postmenopausal women, with significant improvements relative to placebo and effects comparable to those of raloxifene.4
Effects on bone formation/resorption biomarkers
In a recent study, de Villiers and colleagues reported the first phase 3 trial that looked at markers of bone formation and bone resorption.5 A total of 316 women were randomly assigned to receive ospemifene, and 315 received placebo.
Demographic and baseline characteristics were similar between treatment groups. Participants' mean age was approximately 60 years, mean body mass index (BMI) was 27.2 kg/m2, and mean duration of vulvovaginal atrophy (VVA) was 8 to 9 years. Serum levels of 9 bone biomarkers were similar between groups at baseline.
At week 12, all 5 markers of bone resorption improved with ospemifene treatment, and 3 of the 5 (NTX, CTX, and TRACP-5b) did so in a statistically significant fashion compared with placebo (P≤.02). In addition, at week 12, all 4 markers of bone formation improved with ospemifene treatment compared with placebo (P≤.008). Furthermore, lower bone resorption markers with ospemifene were observed regardless of time since menopause (≤5 years or >5 years) or baseline BMD, whether normal, osteopenic, or osteoporotic.
Interpret results cautiously
The authors caution that the data are limited to biochemical markers rather than fracture rates or BMD. It is known, however, that biochemical markers of bone turnover correlate well with the occurrence of fracture.6
Ospemifene is an oral selective estrogen receptor modulator (SERM) approved for the treatment of moderate to severe dyspareunia as well as dryness from VVA due to menopause. The preclinical animal data and the human bone turnover markers all support an antiresorptive action of ospemifene on bone; the direction of its activity is therefore well established, but the magnitude of that activity on BMD and fracture has not been studied. Therefore, when choosing an agent to treat women with dyspareunia or vaginal dryness from VVA of menopause, it may be appropriate to weigh any potential add-on benefit in bone for that particular patient, although one would not use ospemifene as a stand-alone agent for bone only.
Sarcopenia adds to osteoporotic risk for fractures
Lima RM, de Oliveira RJ, Raposo R, et al. Stages of sarcopenia, bone mineral density, and the prevalence of osteoporosis in older women. Arch Osteoporos. 2019;14:38.
In 1989, the term sarcopenia was introduced to refer to the age-related decline in skeletal muscle mass.8 Currently, sarcopenia is defined as a progressive decline in muscle mass, strength, and physical function, thus increasing the risk for various adverse outcomes, including osteoporosis.9 Although muscle and bone tissues differ morphologically, their functioning is closely interconnected.
The sarcopenia-osteoporosis connection
Lima and colleagues sought to investigate the relationship between sarcopenia and osteoporosis.10 They measured women's fat-free mass with dual-energy x-ray absorptiometry (DXA), muscle strength with a dynamometer that recorded knee extension torque while participants were seated, and functional performance with the timed "up and go" test, in which participants were timed as they got up from a chair, walked 3 meters around a cone, and returned to sit in the chair.10,11
The authors used definitions from the European Working Group on Sarcopenia in Older People (EWGSOP). Participants who had normal results in all 3 domains were considered nonsarcopenic. Presarcopenia was defined as low fat-free mass on DXA with normal strength and function. Participants who had low fat-free mass and either low strength or low function were classified as having sarcopenia, and severe sarcopenia was defined as abnormal results in all 3 domains.
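That staging scheme maps directly onto a small decision function. The sketch below encodes the four categories as just summarized; the boolean inputs stand in for the study's DXA, dynamometer, and timed up-and-go cutoffs, which are not reproduced here, so treat it as an illustration of the classification logic only.

```python
# EWGSOP-style staging as summarized above. The cutoffs that define "low" in
# each domain are not reproduced here, so the booleans are assumed to come
# from upstream measurements (DXA, dynamometry, timed up-and-go).
def sarcopenia_stage(low_fat_free_mass: bool, low_strength: bool, low_function: bool) -> str:
    if not low_fat_free_mass:
        return "nonsarcopenia"      # every other stage requires low fat-free mass
    if low_strength and low_function:
        return "severe sarcopenia"  # abnormal in all 3 domains
    if low_strength or low_function:
        return "sarcopenia"         # low mass plus one other abnormal domain
    return "presarcopenia"          # low mass with normal strength and function

print(sarcopenia_stage(True, True, False))  # -> "sarcopenia"
```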
Two hundred thirty-four women (mean age, 68.3 years; range, 60-80) underwent BMD testing and were evaluated according to the 3 domains of possible sarcopenia. All were community dwelling and did not have cognitive impairment or functional dependency.
The rates of osteoporosis were 15.8%, 19.2%, 35.3%, and 46.2% for nonsarcopenia, presarcopenia, sarcopenia, and severe sarcopenia, respectively (P=.002). Whole-body and femoral neck BMD values were significantly lower among all sarcopenia stages when compared with nonsarcopenia (P<.05). The severe sarcopenia group showed the lowest lumbar spine T-scores (P<.05). When clustered, sarcopenia and severe sarcopenia presented a significantly higher risk for osteoporosis (odds ratio, 3.4; 95% confidence interval [CI], 1.5-7.8).
Consider sarcopenia a risk factor
The authors concluded that these "results provide support for the concept that a dose-response relationship exists between sarcopenia stages, BMD, and the presence of osteoporosis. These findings strengthen the clinical significance of the EWGSOP sarcopenia definitions and indicate that severe sarcopenia should be viewed with attention by healthcare professionals."
Osteoporotic fractures are, by definition, fragility fractures. While "frailty" has long been cited as a risk factor for such fractures, increasing evidence suggests that what we previously called frailty includes a significant component of loss of muscle mass, strength, and function, now referred to as sarcopenia. Although few ObGyns are likely to perform objective testing for sarcopenia, even a subjective assessment of sarcopenia status should be considered, in addition to BMD determinations, when making decisions about pharmacotherapy.
Certain characteristics may offset fracture risk in aromatase inhibitor users
Leslie WD, Morin SN, Lix LM, et al. Fracture risk in women with breast cancer initiating aromatase inhibitor therapy: a registry-based cohort study. Oncologist. 2019;24:1432-1438.
The use of aromatase inhibitors (AIs) increases bone turnover and induces bone loss at trabecular-rich skeletal sites at an average rate of 1% to 3% per year, with reports of up to a threefold increase in fracture incidence.13 By contrast, a large nationwide population-based cohort study using US Medicare data identified minimal excess fracture risk with AI use compared with tamoxifen use (11% higher for nonvertebral fractures; not significantly increased for hip fractures).14
An article published previously in this column reported that women on AIs treated with intravenous zoledronic acid had improvements in BMD, while women treated with denosumab had significantly fewer fractures than those receiving placebo, whether they had normal bone mass, osteopenia, or osteoporosis at baseline.15-17
Data derived from a population-based BMD registry
In a recent cohort study, Leslie and colleagues offer the opinion that "observations in the clinical trial setting may differ from routine clinical practice."18 The authors examined fracture outcomes using a large clinical registry of BMD results from women in Manitoba, Canada. They identified women at least 40 years of age initiating AI therapy for breast cancer (n = 1,775), women with breast cancer not receiving AI therapy (n = 1,016), and women from the general population without breast cancer (n = 34,205).
Fracture outcomes were assessed after a mean of 6.2 years for the AI users, all of whom had at least 12 months of AI exposure. At baseline, AI users had higher BMI, higher BMD, lower osteoporosis prevalence, and fewer prior fractures than women from the general population or women with breast cancer without AI use (all P<.001). After adjusting for all covariates, AI users were not at significantly greater risk for major osteoporotic fractures (hazard ratio [HR], 1.15; 95% CI, 0.93-1.42), hip fracture (HR, 0.90; 95% CI, 0.56-1.43), or any fracture (HR, 1.06; 95% CI, 0.88-1.28) compared with the general population.
Results challenge prevailing view
Thus, the authors concluded that higher baseline BMI and BMD and a lower prevalence of prior fracture may offset the adverse effects of AI exposure. Although confirmatory data from large cohort studies are required, the authors stated that their findings challenge the view that all women with breast cancer initiating AI therapy should be considered at high risk for fracture.
It is well known that women with estrogen receptor-positive breast cancers tend to be more obese than patients without cancer and to have higher levels of circulating estrogens. The study by Leslie and colleagues shows that such patients had fewer previous fractures and better baseline bone mass values than the general population. This may prompt us to rethink whether all women initiating AI therapy need to be treated for fracture prevention, as some previous studies have suggested. Clearly, further study is necessary.
1. International Society for Clinical Densitometry. 2019 ISCD Official Positions-Adult. June 2019. https://www.iscd.org/official-positions/2019-iscd-official-positions-adult. Accessed November 22, 2019.
2. Goldstein SR, Neven P, Cummings S, et al. Postmenopausal evaluation and risk reduction with lasofoxifene (PEARL) trial: 5-year gynecological outcomes. Menopause. 2011;18:17-22.
3. Kangas L, Unkila M. Tissue selectivity of ospemifene: pharmacologic profile and clinical implications. Steroids. 2013;78:1273-1280.
4. Constantine GD, Kagan R, Miller PD. Effects of ospemifene on bone parameters including clinical biomarkers in postmenopausal women. Menopause. 2016;23:638-644.
5. de Villiers TJ, Altomare C, Particco M, et al. Effects of ospemifene on bone in postmenopausal women. Climacteric. 2019;22:442-447.
6. Gerdhem P, Ivaska KK, Alatalo SL, et al. Biochemical markers of bone metabolism and prediction of fracture in elderly women. J Bone Miner Res. 2004;19:386-393.
7. Siris ES, Adler R, Bilezikian J, et al. The clinical diagnosis of osteoporosis: a position statement from the National Bone Health Alliance Working Group. Osteoporos Int. 2014;25:1439-1443.
8. Epidemiologic and methodologic problems in determining nutritional status of older persons. Proceedings of a conference. Albuquerque, New Mexico, October 19-21, 1988. Am J Clin Nutr. 1989;50(5 suppl):1121-1235.
9. Drey M, Sieber CC, Bertsch T, et al. Osteosarcopenia is more than sarcopenia and osteopenia alone. Aging Clin Exp Res. 2016;28:895-899.
10. Lima RM, de Oliveira RJ, Raposo R, et al. Stages of sarcopenia, bone mineral density, and the prevalence of osteoporosis in older women. Arch Osteoporos. 2019;14:38.
11. Mathias S, Nayak U, Isaacs B. Balance in elderly patients: the "get-up and go" test. Arch Phys Med Rehabil. 1986;67:387-389.
12. Burstein HJ, Temin S, Anderson H, et al. Adjuvant endocrine therapy for women with hormone receptor-positive breast cancer: American Society of Clinical Oncology clinical practice guideline focused update. J Clin Oncol. 2014;32:2255-2269.
13. Schmidt N, Jacob L, Coleman R, et al. The impact of treatment compliance on fracture risk in women with breast cancer treated with aromatase inhibitors in the United Kingdom. Breast Cancer Res Treat. 2016;155:151-157.
14. Neuner JM, Shi Y, Kong AL, et al. Fractures in a nationwide population-based cohort of users of breast cancer hormonal therapy. J Cancer Surviv. 2018;12:268-275.
15. Goldstein SR. 2015 Update on osteoporosis. OBG Manag. 2015;27:31-39.
16. Majithia N, Atherton PJ, Lafky JM, et al. Zoledronic acid for treatment of osteopenia and osteoporosis in women with primary breast cancer undergoing adjuvant aromatase inhibitor therapy: a 5-year follow-up. Support Care Cancer. 2016;24:1219-1226.
17. Gnant M, Pfeiler G, Dubsky PC, et al; Austrian Breast and Colorectal Cancer Study Group. Adjuvant denosumab in breast cancer (ABCSG-18): a multicenter, randomized, double-blind, placebo-controlled trial. Lancet. 2015;386:433-443.
18. Leslie WD, Morin SN, Lix LM, et al. Fracture risk in women with breast cancer initiating aromatase inhibitor therapy: a registry-based cohort study. Oncologist. 2019;24:1432-1438.
Prior to last year, this column was titled “Update on osteoporosis.” My observation, however, is that too many ObGyn providers simply measure bone mass (known as bone mineral density, or BMD), label a patient as normal, osteopenic, or osteoporotic, and then consider pharmacotherapy. The FRAX fracture prediction algorithm, which incorporates age, weight, height, history of any previous fracture, family history of hip fracture, current smoking, use of glucocorticoid medications, and any history of rheumatoid arthritis, has refined the screening process somewhat, if and when it is utilized. As clinicians, we should never lose sight of our goal: to prevent fragility fractures. Having osteoporosis increases that risk, but not having osteoporosis does not eliminate it.
In this Update, I highlight various ways in which work published this past year may help us to improve our patients’ bone health and reduce fragility fractures.
Updated ISCD guidance emphasizes appropriate BMD testing, use of the
Z-score, and terminology
International Society for Clinical Densitometry. 2019 ISCD Official Positions-Adult. June 2019. https://www.iscd.org/official-positions/2019-ISCD-official-positions-adult.
Continue to: Indications for BMD testing...
Indications for BMD testing
The ISCD's indications for BMD testing remain for women age 65 and older. For postmenopausal women younger than age 65, a BMD test is indicated if they have a risk factor for low bone mass, such as 1) low body weight, 2) prior fracture, 3) high-risk medication use, or 4) a disease or condition associated with bone loss. A BMD test also is indicated for women during the menopausal transition with clinical risk factors for fracture, such as low body weight, prior fracture, or high-risk medication use. Interestingly, the ISCD recommendation for men is similar but uses age 70 for this group.
In addition, the ISCD recommends BMD testing in adults with a fragility fracture, with a disease or condition associated with low bone mass, or taking medications associated with low bone mass, as well as for anyone being considered for pharmacologic therapy, being treated (to monitor treatment effect), not receiving therapy in whom evidence of bone loss would lead to treatment, and in women discontinuing estrogen who should be considered for BMD testing according to the indications already mentioned.
Sites to assess for osteoporosis. The World Health Organization international reference standard for osteoporosis diagnosis is a T-score of -2.5 or less at the femoral neck. The reference standard, from which the T-score is calculated, is for white women aged 20 to 29 years of age from the database of the Third National Health and Nutrition Examination Survey. Osteoporosis also may be diagnosed in postmenopausal women if the T-score of the lumbar spine, total hip, or femoral neck is -2.5 or less. In certain circumstances, the 33% radius (also called the one-third radius) may be utilized. Other hip regions of interest, including Ward's area and the greater trochanter, should not be used for diagnosis.
The skeletal sites at which to measure BMD include the anteroposterior of the spine and hip in all patients. In terms of the spine, use L1-L4 for spine BMD measurement. However, exclude vertebrae that are affected by local structural changes or artifact. Use 3 vertebrae if 4 cannot be used, and 2 if 3 cannot be used. BMD-based diagnostic classification should not be made using a single vertebra. Anatomically abnormal vertebrae may be excluded from analysis if they are clearly abnormal and nonassessable within the resolution of the system, or if there is more than a 1.0 T-score difference between the vertebra in question and adjacent vertebrae. When vertebrae are excluded, the BMD of the remaining vertebrae are used to derive the T-score.
For BMD measurement at the hip, the femoral neck or total proximal femur—whichever is lowest—should be used. Either hip may be measured. Data are insufficient on whether mean T-scores for bilateral hip BMD should be used for diagnosis.
Terminology. While the ISCD retains the term osteopenia, the term low bone mass or low bone density is preferred. People with low bone mass or density are not necessarily at high fracture risk.
Concerning BMD reporting in women prior to menopause, Z-scores, not T-scores, are preferred. A Z-score of -2.0 or lower is defined as "below the expected range for age"; a Z-score above -2.0 is "within the expected range for age."
Use of serial BMD testing
Finally, regarding serial BMD measurements, such testing in combination with clinical assessment of fracture risk can be used to determine whether treatment should be initiated in untreated patients. Furthermore, serial BMD testing can monitor a patient's response to therapy by finding an increase or stability of bone density. It should be used to monitor individuals following cessation of osteoporosis drug therapy. Serial BMD testing can detect loss of bone density, indicating the need to assess treatment adherence, evaluate possible secondary causes of osteoporosis, and possibly re-evaluate therapeutic options.
Intervals between BMD testing should be determined according to each patient's clinical status. Typically, 1 year after initiating or changing therapy is appropriate, with longer intervals once therapeutic effect is established.
Patients commonly ask for BMD testing and ObGyn providers commonly order it. Understanding appropriate use of BMD testing in terms of who to scan, what sites to evaluate, when there may be spurious results of vertebrae due to artifacts, avoiding T-scores in premenopausal women in favor of Z-scores, understanding that low bone mass is a preferred term to osteopenia, and knowing how to order and use serial BMD testing will likely improve our role as the frontline providers to improving bone health in our patients.
Continue to: Dyspareunia drug has positive effects on bone...
Dyspareunia drug has positive effects on bone
de Villiers TJ, Altomare C, Particco M, et al. Effects of ospemifene on bone in postmenopausal women. Climacteric. 2019;22:442-447.
Previously, ospemifene effectively reduced bone loss in ovariectomized rats, with activity comparable to that of estradiol and raloxifene.3 Clinical data from 3 phase 1 or 2 clinical trials found that ospemifene 60 mg/day had a positive effect on biochemical markers for bone turnover in healthy postmenopausal women, with significant improvements relative to placebo and effects comparable to those of raloxifene.4
Effects on bone formation/resorption biomarkers
In a recent study, de Villiers and colleagues reported the first phase 3 trial that looked at markers of bone formation and bone resorption.5 A total of 316 women were randomly assigned to receive ospemifene, and 315 received placebo.
Demographic and baseline characteristics were similar between treatment groups. Participants' mean age was approximately 60 years, mean body mass index (BMI) was 27.2 kg/m2, and mean duration of VVA was 8 to 9 years. Serum levels of 9 bone biomarkers were similar between groups at baseline.
At week 12, all 5 markers of bone resorption improved with ospemifene treatment, and 3 of the 5 (NTX, CTX, and TRACP-5b) did so in a statistically significant fashion compared with placebo (P≤.02). In addition, at week 12, all 4 markers of bone formation improved with ospemifene treatment compared with placebo (P≤.008). Furthermore, lower bone resorption markers with ospemifene were observed regardless of time since menopause (≤ 5 years or
> 5 years) or baseline BMD, whether normal, osteopenic, or osteoporotic.
Interpret results cautiously
The authors caution that the data are limited to biochemical markers rather than fracture or BMD. It is known that there is good correlation between biochemical markers for bone turnover and the occurrence of fracture.6
Ospemifene is an oral SERM approved for the treatment of moderate to severe dyspareunia as well as dryness from VVA due to menopause. The preclinical animal data and human markers of bone turnover all support the antiresorptive action of ospemifene on bones. Thus, one may safely surmise that ospemifene's direction of activity in bone is virtually indisputable. The magnitude of that activity is, however, unstudied. Therefore, when choosing an agent to treat women with dyspareunia or vaginal dryness from VVA of menopause, determining any potential add-on benefit in bone may be appropriate for that particular patient, although one would not use it as a stand-alone agent for bone only.
Continue to: Sarcopenia adds to osteoporotic risk for fractures...
Sarcopenia adds to osteoporotic risk for fractures
Lima RM, de Oliveira RJ, Raposo R, et al. Stages of sarcopenia, bone mineral density, and the prevalence of osteoporosis in older women. Arch Osteoporos. 2019;14:38.
In 1989, the term sarcopenia was introduced to refer to the age-related decline in skeletal muscle mass.8 Currently, sarcopenia is defined as a progressive decline in muscle mass, strength, and physical function, thus increasing the risk for various adverse outcomes, including osteoporosis.9 Although muscle and bone tissues differ morphologically, their functioning is closely interconnected.
The sarcopenia-osteoporosis connection
Lima and colleagues sought to investigate the relationship between sarcopenia and osteoporosis.10 They measured women's fat free mass with dual-energy x-ray absorptiometry (DXA) scanning, muscle strength using a dynamometer to measure knee extension torque while participants were seated, and functional performance using the timed "up and go test" in which participants were timed as they got up from a chair, walked 3 meters around a cone, and returned to sit in the chair.10,11
The authors used definitions from the European Working Group on Sarcopenia in Older People (EWGSOP). Participants who had normal results in all 3 domains were considered nonsarcopenic. Presarcopenia was defined as having low fat free mass on DXA scanning but normal strength and function. Participants who had low fat free mass and either low strength or low function were labeled as having sarcopenia. Severe sarcopenia was defined as abnormal results in all 3 domains.
Two hundred thirty-four women (mean age, 68.3 years; range, 60-80) underwent BMD testing and were evaluated according to the 3 domains of possible sarcopenia. All were community dwelling and did not have cognitive impairment or functional dependency.
The rates of osteoporosis were 15.8%, 19.2%, 35.3%, and 46.2% for nonsarcopenia, presarcopenia, sarcopenia, and severe sarcopenia, respectively (P=.002). Whole-body and femoral neck BMD values were significantly lower among all sarcopenia stages when compared with nonsarcopenia (P<.05). The severe sarcopenia group showed the lowest lumbar spine T-scores (P<.05). When clustered, sarcopenia and severe sarcopenia presented a significantly higher risk for osteoporosis (odds ratio, 3.4; 95% confidence interval [CI], 1.5-7.8).
Consider sarcopenia a risk factor
The authors concluded that these "results provide support for the concept that a dose-response relationship exists between sarcopenia stages, BMD, and the presence of osteoporosis. These findings strengthen the clinical significance of the EWGSOP sarcopenia definitions and indicate that severe sarcopenia should be viewed with attention by healthcare professionals."
Osteoporotic fractures are defined as fragility fractures. While "frailty" has been a risk factor for such fractures in the past, increasing evidence now suggests that what we previously called frailty includes a significant component of loss of muscle mass, strength, and function—referred to as sarcopenia. While it is not likely that many ObGyns will perform objective testing for sarcopenia, conducting even a subjective assessment of such status should be considered in addition to BMD determinations in making decisions about pharmacotherapy.
Continue to: Certain characteristics may offset fracture risk in aromatase inhibitor users...
Certain characteristics may offset fracture risk in aromatase inhibitor users
Leslie WD, Morin SN, Lix LM, et al. Fracture risk in women with breast cancer initiating aromatase inhibitor therapy: a registry-based cohort study. Oncologist. 2019;24:1432-1438.
The use of AIs increases bone turnover and induces bone loss at trabecular-rich bone sites at an average rate of 1% to 3% per year, with reports of up to a threefold increased fracture incidence.13 By contrast, a large nationwide population-based cohort study using US Medicare data identified minimal fracture risk from AI use compared with tamoxifen use (11% higher for nonvertebral fractures, not significantly increased for hip fractures).14
An article published previously in this column reported that women on AIs treated with intravenous zoledronic acid had improvements in BMD, while women treated with denosumab had statistically significant fewer fractures compared with those receiving placebo, whether they had normal bone mass, osteopenia, or osteoporosis at
baseline.15-17
Data derived from a population-based BMD registry
In a recent cohort study, Leslie and colleagues offer the opinion that "observations in the clinical trial setting may differ from routine clinical practice."18 The authors examined fracture outcomes using a large clinical registry of BMD results from women in Manitoba, Canada. They identified women at least 40 years of age initiating AI therapy for breast cancer (n = 1,775), women with breast cancer not receiving AI therapy (n = 1,016), and women from the general population without breast cancer (n = 34,205).
Fracture outcomes were assessed after a mean of 6.2 years for the AI users, all of whom had at least 12 months of AI exposure. At baseline, AI users had higher BMI, higher BMD, lower osteoporosis prevalence, and fewer prior fractures than women from the general population or women with breast cancer without AI use (all P<.001). After adjusting for all covariates, AI users were not at significantly greater risk for major osteoporotic fractures (hazard ratio [HR], 1.15; 95% CI, 0.93-1.42), hip fracture (HR, 0.90; 95% CI, 0.56-1.43), or any fracture (HR, 1.06; 95% CI, 0.88-1.28) compared with the general population.
Results challenge prevailing view
Thus, the authors concluded that higher baseline BMI, BMD, and lower prevalence of prior fracture at baseline may offset the adverse effects of AI exposure. Although confirmatory data from large cohort studies are required, the authors stated that their findings challenge the view that all women with breast cancer initiating AI therapy should be considered at high risk for fracture.
It is well known that women with estrogen receptor-positive breast cancers tend to be more obese than noncancer patients and have higher levels of circulating estrogens. The study by Leslie and colleagues shows that such patients will have fewer previous fractures and better baseline bone mass values than the general population. This may prompt us to rethink whether all women initiating AI therapy need to be treated for fracture prevention, as some previous studies have suggested. Clearly, further study is necessary.
Prior to last year, this column was titled “Update on osteoporosis.” My observation, however, is that too many ObGyn providers simply measure bone mass (known as bone mineral density, or BMD), label a patient as normal, osteopenic, or osteoporotic, and then consider pharmacotherapy. The FRAX fracture prediction algorithm, which incorporates age, weight, height, history of any previous fracture, family history of hip fracture, current smoking, use of glucocorticoid medications, and any history of rheumatoid arthritis, has refined the screening process somewhat, if and when it is utilized. As clinicians, we should never lose sight of our goal: to prevent fragility fractures. Having osteoporosis increases that risk, but not having osteoporosis does not eliminate it.
In this Update, I highlight various ways in which work published this past year may help us to improve our patients’ bone health and reduce fragility fractures.
Updated ISCD guidance emphasizes appropriate BMD testing, use of the
Z-score, and terminology
International Society for Clinical Densitometry. 2019 ISCD Official Positions-Adult. June 2019. https://www.iscd.org/official-positions/2019-ISCD-official-positions-adult.
Continue to: Indications for BMD testing...
Indications for BMD testing
The ISCD's indications for BMD testing remain for women age 65 and older. For postmenopausal women younger than age 65, a BMD test is indicated if they have a risk factor for low bone mass, such as 1) low body weight, 2) prior fracture, 3) high-risk medication use, or 4) a disease or condition associated with bone loss. A BMD test also is indicated for women during the menopausal transition with clinical risk factors for fracture, such as low body weight, prior fracture, or high-risk medication use. Interestingly, the ISCD recommendation for men is similar but uses age 70 for this group.
In addition, the ISCD recommends BMD testing in adults with a fragility fracture, with a disease or condition associated with low bone mass, or taking medications associated with low bone mass, as well as for anyone being considered for pharmacologic therapy, being treated (to monitor treatment effect), not receiving therapy in whom evidence of bone loss would lead to treatment, and in women discontinuing estrogen who should be considered for BMD testing according to the indications already mentioned.
Sites to assess for osteoporosis. The World Health Organization international reference standard for osteoporosis diagnosis is a T-score of -2.5 or less at the femoral neck. The reference standard, from which the T-score is calculated, is for white women aged 20 to 29 years of age from the database of the Third National Health and Nutrition Examination Survey. Osteoporosis also may be diagnosed in postmenopausal women if the T-score of the lumbar spine, total hip, or femoral neck is -2.5 or less. In certain circumstances, the 33% radius (also called the one-third radius) may be utilized. Other hip regions of interest, including Ward's area and the greater trochanter, should not be used for diagnosis.
The skeletal sites at which to measure BMD include the anteroposterior of the spine and hip in all patients. In terms of the spine, use L1-L4 for spine BMD measurement. However, exclude vertebrae that are affected by local structural changes or artifact. Use 3 vertebrae if 4 cannot be used, and 2 if 3 cannot be used. BMD-based diagnostic classification should not be made using a single vertebra. Anatomically abnormal vertebrae may be excluded from analysis if they are clearly abnormal and nonassessable within the resolution of the system, or if there is more than a 1.0 T-score difference between the vertebra in question and adjacent vertebrae. When vertebrae are excluded, the BMD of the remaining vertebrae are used to derive the T-score.
For BMD measurement at the hip, the femoral neck or total proximal femur—whichever is lowest—should be used. Either hip may be measured. Data are insufficient on whether mean T-scores for bilateral hip BMD should be used for diagnosis.
Terminology. While the ISCD retains the term osteopenia, the term low bone mass or low bone density is preferred. People with low bone mass or density are not necessarily at high fracture risk.
Concerning BMD reporting in women prior to menopause, Z-scores, not T-scores, are preferred. A Z-score of -2.0 or lower is defined as "below the expected range for age"; a Z-score above -2.0 is "within the expected range for age."
Use of serial BMD testing
Finally, regarding serial BMD measurements, such testing in combination with clinical assessment of fracture risk can be used to determine whether treatment should be initiated in untreated patients. Furthermore, serial BMD testing can monitor a patient's response to therapy by finding an increase or stability of bone density. It should be used to monitor individuals following cessation of osteoporosis drug therapy. Serial BMD testing can detect loss of bone density, indicating the need to assess treatment adherence, evaluate possible secondary causes of osteoporosis, and possibly re-evaluate therapeutic options.
Intervals between BMD testing should be determined according to each patient's clinical status. Typically, 1 year after initiating or changing therapy is appropriate, with longer intervals once therapeutic effect is established.
Patients commonly ask for BMD testing and ObGyn providers commonly order it. Understanding appropriate use of BMD testing in terms of who to scan, what sites to evaluate, when there may be spurious results of vertebrae due to artifacts, avoiding T-scores in premenopausal women in favor of Z-scores, understanding that low bone mass is a preferred term to osteopenia, and knowing how to order and use serial BMD testing will likely improve our role as the frontline providers to improving bone health in our patients.
Continue to: Dyspareunia drug has positive effects on bone...
Dyspareunia drug has positive effects on bone
de Villiers TJ, Altomare C, Particco M, et al. Effects of ospemifene on bone in postmenopausal women. Climacteric. 2019;22:442-447.
Previously, ospemifene effectively reduced bone loss in ovariectomized rats, with activity comparable to that of estradiol and raloxifene.3 Clinical data from 3 phase 1 or 2 clinical trials found that ospemifene 60 mg/day had a positive effect on biochemical markers for bone turnover in healthy postmenopausal women, with significant improvements relative to placebo and effects comparable to those of raloxifene.4
Effects on bone formation/resorption biomarkers
In a recent study, de Villiers and colleagues reported the first phase 3 trial that looked at markers of bone formation and bone resorption.5 A total of 316 women were randomly assigned to receive ospemifene, and 315 received placebo.
Demographic and baseline characteristics were similar between treatment groups. Participants' mean age was approximately 60 years, mean body mass index (BMI) was 27.2 kg/m2, and mean duration of VVA was 8 to 9 years. Serum levels of 9 bone biomarkers were similar between groups at baseline.
At week 12, all 5 markers of bone resorption improved with ospemifene treatment, and 3 of the 5 (NTX, CTX, and TRACP-5b) did so in a statistically significant fashion compared with placebo (P≤.02). In addition, at week 12, all 4 markers of bone formation improved with ospemifene treatment compared with placebo (P≤.008). Furthermore, lower bone resorption markers with ospemifene were observed regardless of time since menopause (≤ 5 years or
> 5 years) or baseline BMD, whether normal, osteopenic, or osteoporotic.
Interpret results cautiously
The authors caution that the data are limited to biochemical markers rather than fracture or BMD. It is known that there is good correlation between biochemical markers for bone turnover and the occurrence of fracture.6
Ospemifene is an oral SERM approved for the treatment of moderate to severe dyspareunia as well as dryness from VVA due to menopause. The preclinical animal data and human markers of bone turnover all support the antiresorptive action of ospemifene on bones. Thus, one may safely surmise that ospemifene's direction of activity in bone is virtually indisputable. The magnitude of that activity is, however, unstudied. Therefore, when choosing an agent to treat women with dyspareunia or vaginal dryness from VVA of menopause, determining any potential add-on benefit in bone may be appropriate for that particular patient, although one would not use it as a stand-alone agent for bone only.
Continue to: Sarcopenia adds to osteoporotic risk for fractures...
Sarcopenia adds to osteoporotic risk for fractures
Lima RM, de Oliveira RJ, Raposo R, et al. Stages of sarcopenia, bone mineral density, and the prevalence of osteoporosis in older women. Arch Osteoporos. 2019;14:38.
In 1989, the term sarcopenia was introduced to refer to the age-related decline in skeletal muscle mass.8 Currently, sarcopenia is defined as a progressive decline in muscle mass, strength, and physical function, thus increasing the risk for various adverse outcomes, including osteoporosis.9 Although muscle and bone tissues differ morphologically, their functioning is closely interconnected.
The sarcopenia-osteoporosis connection
Lima and colleagues sought to investigate the relationship between sarcopenia and osteoporosis.10 They measured women's fat free mass with dual-energy x-ray absorptiometry (DXA) scanning, muscle strength using a dynamometer to measure knee extension torque while participants were seated, and functional performance using the timed "up and go test" in which participants were timed as they got up from a chair, walked 3 meters around a cone, and returned to sit in the chair.10,11
The authors used definitions from the European Working Group on Sarcopenia in Older People (EWGSOP). Participants who had normal results in all 3 domains were considered nonsarcopenic. Presarcopenia was defined as having low fat free mass on DXA scanning but normal strength and function. Participants who had low fat free mass and either low strength or low function were labeled as having sarcopenia. Severe sarcopenia was defined as abnormal results in all 3 domains.
Two hundred thirty-four women (mean age, 68.3 years; range, 60-80) underwent BMD testing and were evaluated according to the 3 domains of possible sarcopenia. All were community dwelling and did not have cognitive impairment or functional dependency.
The rates of osteoporosis were 15.8%, 19.2%, 35.3%, and 46.2% for nonsarcopenia, presarcopenia, sarcopenia, and severe sarcopenia, respectively (P=.002). Whole-body and femoral neck BMD values were significantly lower among all sarcopenia stages when compared with nonsarcopenia (P<.05). The severe sarcopenia group showed the lowest lumbar spine T-scores (P<.05). When clustered, sarcopenia and severe sarcopenia presented a significantly higher risk for osteoporosis (odds ratio, 3.4; 95% confidence interval [CI], 1.5-7.8).
Consider sarcopenia a risk factor
The authors concluded that these "results provide support for the concept that a dose-response relationship exists between sarcopenia stages, BMD, and the presence of osteoporosis. These findings strengthen the clinical significance of the EWGSOP sarcopenia definitions and indicate that severe sarcopenia should be viewed with attention by healthcare professionals."
Osteoporotic fractures are defined as fragility fractures. While "frailty" has been a risk factor for such fractures in the past, increasing evidence now suggests that what we previously called frailty includes a significant component of loss of muscle mass, strength, and function—referred to as sarcopenia. While it is not likely that many ObGyns will perform objective testing for sarcopenia, conducting even a subjective assessment of such status should be considered in addition to BMD determinations in making decisions about pharmacotherapy.
Continue to: Certain characteristics may offset fracture risk in aromatase inhibitor users...
Certain characteristics may offset fracture risk in aromatase inhibitor users
Leslie WD, Morin SN, Lix LM, et al. Fracture risk in women with breast cancer initiating aromatase inhibitor therapy: a registry-based cohort study. Oncologist. 2019;24:1432-1438.
The use of AIs increases bone turnover and induces bone loss at trabecular-rich bone sites at an average rate of 1% to 3% per year, with reports of up to a threefold increased fracture incidence.13 By contrast, a large nationwide population-based cohort study using US Medicare data identified minimal fracture risk from AI use compared with tamoxifen use (11% higher for nonvertebral fractures, not significantly increased for hip fractures).14
An article published previously in this column reported that women on AIs treated with intravenous zoledronic acid had improvements in BMD, while women treated with denosumab had statistically significant fewer fractures compared with those receiving placebo, whether they had normal bone mass, osteopenia, or osteoporosis at
baseline.15-17
Data derived from a population-based BMD registry
In a recent cohort study, Leslie and colleagues offer the opinion that "observations in the clinical trial setting may differ from routine clinical practice."18 The authors examined fracture outcomes using a large clinical registry of BMD results from women in Manitoba, Canada. They identified women at least 40 years of age initiating AI therapy for breast cancer (n = 1,775), women with breast cancer not receiving AI therapy (n = 1,016), and women from the general population without breast cancer (n = 34,205).
Fracture outcomes were assessed after a mean of 6.2 years for the AI users, all of whom had at least 12 months of AI exposure. At baseline, AI users had higher BMI, higher BMD, lower osteoporosis prevalence, and fewer prior fractures than women from the general population or women with breast cancer without AI use (all P<.001). After adjusting for all covariates, AI users were not at significantly greater risk for major osteoporotic fractures (hazard ratio [HR], 1.15; 95% CI, 0.93-1.42), hip fracture (HR, 0.90; 95% CI, 0.56-1.43), or any fracture (HR, 1.06; 95% CI, 0.88-1.28) compared with the general population.
Results challenge prevailing view
Thus, the authors concluded that higher baseline BMI, BMD, and lower prevalence of prior fracture at baseline may offset the adverse effects of AI exposure. Although confirmatory data from large cohort studies are required, the authors stated that their findings challenge the view that all women with breast cancer initiating AI therapy should be considered at high risk for fracture.
It is well known that women with estrogen receptor-positive breast cancers tend to be more obese than noncancer patients and have higher levels of circulating estrogens. The study by Leslie and colleagues shows that such patients will have fewer previous fractures and better baseline bone mass values than the general population. This may prompt us to rethink whether all women initiating AI therapy need to be treated for fracture prevention, as some previous studies have suggested. Clearly, further study is necessary.
- International Society for Clinical Densitometry. 2019 ISCD Official Positions-Adult. June 2019. https://www.iscd.org/official-positions/2019-iscd-official-positions-adult. Accessed November 22, 2019.
- Goldstein SR, Neven P, Cummings S, et al. Postmenopausal evaluation and risk reduction with lasofoxifene (PEARL) trial: 5-year gynecological outcomes. Menopause. 2011;18:17-22.
- Kangas L, Unkila M. Tissue selectivity of ospemifene: pharmacologic profile and clinical implications. Steroids. 2013;78:1273-1280.
- Constantine GD, Kagan R, Miller PD. Effects of ospemifene on bone parameters including clinical biomarkers in postmenopausal women. Menopause. 2016;23:638-644.
- de Villiers TJ, Altomare C, Particco M, et al. Effects of ospemifene on bone in postmenopausal women. Climacteric. 2019;22:442-447.
- Gerdhem P, Ivaska KK, Alatalo SL, et al. Biochemical markers of bone metabolism and prediction of fracture in elderly women. J Bone Miner Res. 2004;19:386-393.
- Siris ES, Adler R, Bilezikian J, et al. The clinical diagnosis of osteoporosis: a position statement from the National Bone Health Alliance Working Group. Osteoporos Int. 2014;25:1439-1443.
- Epidemiologic and methodologic problems in determining nutritional status of older persons. Proceedings of a conference. Albuquerque, New Mexico, October 19-21, 1988. Am J Clin Nutr. 1989;50(5 suppl):1121-1235.
- Drey M, Sieber CC, Bertsch T, et al. Osteosarcopenia is more than sarcopenia and osteopenia alone. Aging Clin Exp Res. 2016;28:895-899.
- Lima RM, de Oliveira RJ, Raposo R, et al. Stages of sarcopenia, bone mineral density, and the prevalence of osteoporosis in older women. Arch Osteoporos. 2019;14:38.
- Mathias S, Nayak U, Isaacs B. Balance in elderly patients: the "get-up and go" test. Arch Phys Med Rehabil. 1986;67:387-389.
- Burstein HJ, Temin S, Anderson H, et al. Adjuvant endocrine therapy for women with hormone receptor-positive breast cancer: American Society of Clinical Oncology clinical practice guideline focused update. J Clin Oncol. 2014;32:2255-2269.
- Schmidt N, Jacob L, Coleman R, et al. The impact of treatment compliance on fracture risk in women with breast cancer treated with aromatase inhibitors in the United Kingdom. Breast Cancer Res Treat. 2016;155:151-157.
- Neuner JM, Shi Y, Kong AL, et al. Fractures in a nationwide population-based cohort of users of breast cancer hormonal therapy. J Cancer Surviv. 2018;12:268-275.
- Goldstein SR. 2015 Update on osteoporosis. OBG Manag. 2015;27:31-39.
- Majithia N, Atherton PJ, Lafky JM, et al. Zoledronic acid for treatment of osteopenia and osteoporosis in women with primary breast cancer undergoing adjuvant aromatase inhibitor therapy: a 5-year follow-up. Support Care Cancer. 2016;24:1219-1226.
- Gnant M, Pfeiler G, Dubsky PC, et al; Austrian Breast and Colorectal Cancer Study Group. Adjuvant denosumab in breast cancer (ABCSG-18): a multicenter, randomized, double-blind, placebo-controlled trial. Lancet. 2015;386:433-443.
- Leslie WD, Morin SN, Lix LM, et al. Fracture risk in women with breast cancer initiating aromatase inhibitor therapy: a registry-based cohort study. Oncologist. 2019;24:1432-1438.
Thanksgiving took a bite out of HealthCare.gov
Health care insurance may have taken a bit of a back seat to turkey and shopping last week, according to the Centers for Medicare & Medicaid Services.

Consumers selected 28% fewer plans during week 5 (Nov. 24-30) of Open Enrollment 2020 than in week 4. A similar drop of 33% occurred last year between week 3 of open enrollment and week 4, which included Thanksgiving and Black Friday, CMS data show.
Through week 5, total plan selections for 2020 health insurance coverage came in at almost 2.9 million, down about 10% from last year’s 5-week total of 3.2 million for 2019 coverage.
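As a quick arithmetic check, the rounded cumulative totals reproduce the “down about 10%” characterization (a minimal sketch; the 2.9 million and 3.2 million values are rounded, so the result is approximate):

```python
# Year-over-year change in cumulative HealthCare.gov plan selections through week 5,
# using the rounded totals reported above (values in millions of plans).
total_2020 = 2.9  # cumulative selections for 2020 coverage, weeks 1-5
total_2019 = 3.2  # cumulative selections for 2019 coverage, weeks 1-5

pct_change = (total_2020 - total_2019) / total_2019 * 100
print(f"Year-over-year change: {pct_change:.1f}%")  # about -9.4%, i.e., "down about 10%"
```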
The HealthCare.gov platform is being used by 38 states for the 2020 benefit year, and so far Florida residents have selected the most plans, almost 797,000. Texas is next with just over 400,000 selections, followed by Georgia with 173,000 and North Carolina with 162,000, CMS reported Dec. 4.
Snow Way to Take Care of Your Heart
ANSWER
This ECG shows normal sinus rhythm, an anterior myocardial infarction, and inferolateral injury consistent with an acute ST-elevation myocardial infarction (STEMI).
A P wave for every QRS complex and a QRS complex with every P wave, with a consistent PR interval and a rate > 60 and < 100 beats/min, signifies sinus rhythm.
Criteria for an anterior STEMI include new ST elevation (≥ 2 mm [0.2 mV]) at the J point in leads V3 and V4. Inferolateral injury is indicated inferiorly by ST changes in leads II, III, and aVF and laterally by the ST elevation in leads V5 and V6.
Subsequent cardiac catheterization showed an occluded proximal left anterior descending artery and significant diagonal and obtuse marginal disease.
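For readers who want the rhythm criteria above spelled out operationally, here is a minimal sketch in Python of the textbook checks for sinus rhythm; the function name, arguments, and PR-interval tolerance are illustrative assumptions rather than part of any ECG system, and the sample values come from the repeat ECG described later in this case (ventricular rate 80 beats/min, PR interval 162 ms).

```python
def is_sinus_rhythm(rate_bpm, pr_intervals_ms, p_waves, qrs_complexes, pr_tolerance_ms=20.0):
    """Apply the simple textbook criteria for sinus rhythm described above."""
    one_to_one = p_waves == qrs_complexes  # a P wave for every QRS and a QRS for every P wave
    rate_ok = 60 < rate_bpm < 100          # rate > 60 and < 100 beats/min
    pr_consistent = (max(pr_intervals_ms) - min(pr_intervals_ms)) <= pr_tolerance_ms
    return one_to_one and rate_ok and pr_consistent

# Example using this case's repeat ECG measurements (PR assumed stable at 162 ms across beats).
print(is_sinus_rhythm(rate_bpm=80, pr_intervals_ms=[162, 162, 162],
                      p_waves=10, qrs_complexes=10))  # True
```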
A 58-year-old man is snowmobiling with friends when he develops crushing substernal chest pain. He immediately stops his snowmobile and waves his arms for help—but by the time his friends reach him, he is lying on the ground, clutching his chest.
When asked what happened, he tells his friends that he’s been experiencing chest pain for the past hour but didn’t want to stop or interrupt their fun. He further reveals that he’s had chest “twinges” for the past 2 months, but they were always brief, and he didn’t think they were anything to be concerned about. He acknowledges that the current episode is “far worse” than what he previously experienced.
Because they are in the wilderness, no one in the group is able to establish cellphone service to call 911. The patient is loaded onto the back of another snowmobile for the 30-minute ride to the parking lot, where cellular service is accessible. They call 911, and an ACLS ambulance arrives about 50 minutes later.
An ECG is obtained in the field and transmitted to the receiving hospital, and the catheterization lab is notified of an incoming patient. Transport to the hospital takes an hour; during the trip, the patient is administered oxygen, morphine, nitroglycerin, and an aspirin, and he is noted to have several nonsustained episodes of polymorphic ventricular tachycardia. The patient arrives at the hospital about 4 hours after onset of chest pain.
Medical history includes longstanding uncontrolled hypertension, recent onset of type 2 diabetes, and gastric reflux. He has never had shortness of breath, dyspnea on exertion, syncope, or near-syncope.
Current medications include lisinopril and metformin. However, the patient informs you that he hasn’t taken lisinopril in more than 3 months, and although he’s been given a prescription for metformin, he hasn’t filled it. He has no known drug allergies.
The patient is a mechanic at a local auto dealership. He smokes between 1 and 1.5 packs of cigarettes per day and has attempted to quit several times. He also consumes about 1 case of beer per week.
He is divorced, has no children, and lives alone. Both parents died in an automobile accident. The patient knows his father had several heart attacks beginning in his mid-50s and his mother “had thyroid problems.” His grandparents were known to have coronary artery disease and diabetes.
Review of systems is positive for a longstanding smoker’s cough and a healing burn on his right forearm, attributed to a welding injury.
His pretransport vital signs include a blood pressure of 178/88 mm Hg; pulse, 88 beats/min; respiratory rate, 18 breaths/min; and temperature, 97.6ºF. His stated weight is 265 lb and his height, 69 in.
Your findings on the physical exam corroborate those called in by the paramedics: an obese white male in obvious distress but alert and cooperative. His lungs reveal diffuse rales and crackles that clear with vigorous coughing. His cardiac exam reveals a regular rhythm at a rate of 80 beats/min with no murmurs or rubs. The abdomen is obese but otherwise normal. There is no peripheral edema. Pulses are strong and equal bilaterally. The neurologic exam is grossly intact. A bandaged second-degree burn is noted on the lower right forearm.
A repeat ECG shows a ventricular rate of 80 beats/min; PR interval, 162 ms; QRS duration, 106 ms; QT/QTc interval, 370/426 ms; P axis, 51°; R axis, –20°; and T axis, 70°. What is your interpretation?
FDA fast-tracks psilocybin for major depressive disorder
Psilocybin, a short-acting compound that is the psychoactive ingredient in “magic mushrooms,” has received a Breakthrough Therapy designation from the Food and Drug Administration for the treatment of adults with major depressive disorder.
The designation was given to the Usona Institute, a nonprofit medical research organization, and comes in the wake of Usona’s launch of a phase 2 clinical trial that will include about 80 participants at seven study sites across the United States, according to a press release. Two sites are currently recruiting patients, and the others are expected to begin recruiting in 2020.
Breakthrough Therapy designation as defined by the FDA means that, based on preliminary research, “the drug may demonstrate substantial improvement over available therapy on a clinically significant endpoint.” In this case, Usona is working with the University of Wisconsin’s University Hospital in Madison, and other collaborators, according to a presentation by Malynn Utzinger, MD, director of integrative medicine and cofounder of the organization.
More information on the Usona Institute and Usona’s clinical trials is available at https://usonaclinicaltrials.org/.
Prosody recognition associated with functioning in first-episode schizophrenia
Affective prosody recognition is associated with role and social functioning in patients with a recent first episode of schizophrenia, according to Kelsey A. Bonfils, PhD, and associates.
The investigators conducted an analysis of 49 patients aged between 18 and 45 years with a recent first episode of schizophrenia who were participating in a larger randomized, controlled trial. Symptoms of schizophrenia were assessed using a 24-item version of the Brief Psychiatric Rating Scale (BPRS), and functioning was assessed using the Global Functioning Scale (GFS) and Role Functioning Scale (RFS). Study participants took the Prosody Task, which assesses the ability to recognize happiness, sadness, anger, fear, and disgust, and the Facial Emotion Identification Test (FEIT), which assesses the ability to recognize happiness, sadness, anger, fear, surprise, and disgust, reported Dr. Bonfils of the Veterans Affairs Pittsburgh Healthcare System and the department of psychiatry at the University of Pittsburgh. The study was published in Schizophrenia Research: Cognition.
In the Prosody Task, patients were significantly more likely to recognize anger (45.6% correct) and sadness (43.8%), and significantly less likely to recognize disgust (21.9%). In the FEIT, patients were most likely to recognize happiness (97.5%), followed by surprise (90.0%), anger (85.0%), sadness (77.5%), disgust (73.8%), and fear (55.0%).
Performance in the Prosody Task was associated with GFS role functioning and RFS social functioning, while FEIT performance was not significantly associated with any functioning measure. In terms of symptoms, Prosody Task performance was negatively associated with disorganization in the BPRS, and FEIT performance was associated with disorganization, reality distortion, and positive symptoms.
“These findings are consistent with the view that emotion recognition deficits could be contributing to deficits in the ability of people with first-episode schizophrenia to adequately function in the real world, both in relationships with friends and in normative young adult roles,” the investigators wrote.
Dr. Bonfils reported no conflicts of interest. Three coauthors reported receiving support, research grants, and funding from several pharmaceutical companies.
SOURCE: Bonfils KA et al. Schizophr Res Cogn. 2019. doi: 10.1016/j.scog.2019.100153.
FROM SCHIZOPHRENIA RESEARCH: COGNITION
Intensive BP control reduced dementia but increased brain atrophy and hurt cognition
SAN DIEGO – Intensive blood pressure control over 4 years reduced the overall risk of all-cause dementia by 17%, compared with standard care, but in subanalyses of the Systolic Blood Pressure Intervention Trial (SPRINT) it was also associated with significant decreases in cognitive function and total brain volume, researchers said at the Clinical Trials on Alzheimer’s Disease conference.
Whether these between-group differences were clinically meaningful was the topic of some debate, but they were enough to prompt Mary Sano, PhD, to strongly state her reservations.
“The cardiovascular effects of SPRINT were impressive, but I am concerned about minimizing the potentially negative effect on cognition,” said Dr. Sano, professor of psychiatry and director of the Alzheimer’s Disease Research Center at the Icahn School of Medicine at Mount Sinai, New York. “Do I really want to treat a healthy, nonimpaired patient like this if I have to warn them that their cognition might actually get worse? We just cannot minimize this risk. There is very strong evidence that [intensive treatment of blood pressure] might be a step backward in cognition. Would you lower your own blood pressure at a risk of losing some points on your cognition?”
The subanalyses were conducted as part of the SPRINT Memory and Cognition In Decreased Hypertension (SPRINT MIND) substudy, which looked at cardiovascular and mortality outcomes in 9,361 subjects whose hypertension was managed intensively or by standard care (target systolic blood pressure less than 120 mm Hg vs. less than 140 mm Hg). The trial was stopped early because of a 25% reduction in the primary composite cardiovascular disease endpoint and a 27% reduction in all-cause mortality in the intensive-treatment group.
SPRINT MIND examined the risks of incident probable dementia, mild cognitive impairment (MCI), and a composite outcome of both. Intensive control reduced the risk of MCI by 19% and the combined outcome by 15%.
At the conference, SPRINT MIND investigators presented three long-term subanalyses with a median intervention and follow-up time of about 4 years.
Sarah Gaussoin of Wake Forest University, Winston-Salem, N.C., presented unpublished data detailing the effects of intensive control on several dementia subtypes: nonamnestic single domain, nonamnestic multidomain, amnestic single domain, and amnestic multidomain. There were 640 subjects in this analysis.
After a median of 3.3 years of intervention and 5 years of follow-up, there were no differences in the rate of incident probable dementia between the single- and multidomain nonamnestic groups. “We did see a strong 22% decreased risk in single-domain versus multidomain amnestic MCI, however,” she said.
Nicholas Pajewski, PhD, also of Wake Forest University, discussed more detailed cognitive outcomes in SPRINT MIND among 2,900 subjects who had a full battery of cognitive testing at every assessment over 5 years. The outcomes included memory deficit and processing speed.
Dr. Pajewski reported finding no significant difference between the groups in the rate of memory decline. But there was a greater rate of decline in processing speed in the intensively treated group, he added. The difference was small but statistically significant.
The difference was largely driven by results of a single cognitive test, the Trail Making Test Part A. “It corresponded to about a 1.25-second increase over 4 years” in the time needed to complete the test, Dr. Pajewski said.
There were no between-group differences in any of the other domains explored, including language, executive function, global cognitive function, or the Montreal Cognitive Assessment.
“Obviously, these results are perplexing,” given the overall positive results of SPRINT MIND, he said. “Intensive blood pressure control is a beneficial thing, and we expected to see an effect on memory, or a blunting of decline, and instead we saw some small decrements going the other way. This led us to speculate about what’s going on.”
The trial relied on a narrow definition of MCI that might have affected the outcomes. There was also a very broad range of ages in the study, ranging from 53 to 86 years. More importantly, he said, the original SPRINT study didn’t collect cognitive data at baseline, so there was no way to know how many subjects already might have had MCI when they entered the trial.
Ilya Nasrallah, MD, PhD, of the University of Pennsylvania, Philadelphia, presented MRI data on white-matter lesions, hippocampal volume, fractional anisotropy in the cingulum, and cerebral blood flow. The median time between scans was 4 years, with a median treatment time of 3.4 years.
The standard-care group showed a significantly greater increase in white-matter lesion volume at the follow-up scan than did the intensive-treatment group (1.45 cm3 vs. 0.92 cm3). But the intensively treated group had significantly more brain atrophy, losing a median of 30.6 cm3, compared with a loss of 26.9 cm3 in the standard-treatment group.
“It was a very small difference amounting to less than 1% of the total brain volume, but it was still statistically significant,” Dr. Nasrallah said.
Loss of gray-matter volume drove about two-thirds of the difference in the intensively treated group. There was a corresponding increase in cerebrospinal fluid volume that was driven by differences in the ventricles and the subarachnoid space.
However, there were no significant differences in right, left, or total hippocampal volume. There also were no differences in cingulate bundle anisotropy or cerebral blood flow.
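As a rough check on the “less than 1%” characterization, the arithmetic below compares the between-group difference in median brain volume loss with an assumed typical adult total brain volume of about 1,200 cm3; that reference volume is an assumption for illustration and is not a figure reported by the investigators.

```python
# Between-group difference in median brain volume loss, as a fraction of an
# assumed total brain volume (~1,200 cm^3 is an illustrative typical value).
atrophy_intensive_cm3 = 30.6  # median loss, intensive-treatment group
atrophy_standard_cm3 = 26.9   # median loss, standard-treatment group
assumed_total_brain_cm3 = 1200.0

difference_cm3 = atrophy_intensive_cm3 - atrophy_standard_cm3  # 3.7 cm^3
pct_of_brain = difference_cm3 / assumed_total_brain_cm3 * 100
print(f"Difference: {difference_cm3:.1f} cm^3 (~{pct_of_brain:.2f}% of assumed brain volume)")
# ~0.31%, consistent with "less than 1% of the total brain volume."
```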
SPRINT was funded by the National Institutes of Health. None of the investigators reported having financial conflicts of interest.
REPORTING FROM CTAD 2019
Gastroenterology practice evaluations: Can patients get satisfaction?
Although largely untouched by the first and second industrial revolutions in the 18th and 20th centuries, the practice of medicine in the 21st century is increasingly susceptible to the vast transformative power of the third – and rapidly approaching fourth – industrial revolutions. New technological advances and their associated distribution of knowledge and connectedness have allowed patients unprecedented access to health care information. The salutary effects of this change are manifest in a variety of areas, including registries that facilitate participation in state-of-the-art research such as ClinicalTrials.gov and the ability to track nascent trends in infectious diseases with Google searches.1
Although the stakes may seem lower when patients go online to choose a practitioner, the reality demonstrates just how important those search results can be. Paralleling trends in other sectors, there is an increasing emphasis on ranking health care facilities, practitioners, and medical experiences. This phenomenon extends beyond private Internet sites to government scorecards, which has significant implications. But even with widespread access to information, there is frequently a lack of context for interpreting these data. Consequently, it is worth exploring why measuring satisfaction can be important, how patients can rate practitioners, and what to do with the available information to improve care delivery.
The idea of measuring patient satisfaction with delivered health care began in earnest during the 1980s, when Irwin Press and Rodney Ganey collaborated to create formal processes for collecting data on the “salient aspects of ... health care experience, [involving] the interaction of expectations, preferences, and satisfaction with medical care.”2,3 The enthusiasm for collecting these data has grown greatly since then. More recently, the federal government began obtaining data in 2002, when the Centers for Medicare & Medicaid Services (CMS) and the Agency for Healthcare Research and Quality (AHRQ) collaborated to develop a standardized questionnaire for hospitalized patients known as the Hospital Consumer Assessment of Healthcare Providers and Systems, or HCAHPS.4 Subsequently, standardized survey instruments have been developed for nearly every phase of care, including outpatient care (CG-CAHPS), emergency care (ED-CAHPS), and ambulatory surgery care (OAS-CAHPS). These instruments are particularly relevant to gastroenterologists, with questions querying patients about preprocedure instructions, surgery center check-in processes, comfort of procedure and waiting rooms, friendliness of providers, and quality of postprocedure information.
The focus on rating satisfaction intensified in 2010 after the passage of the Affordable Care Act (ACA). Around this time, patient satisfaction and health outcomes became more deeply integrated concepts in health care quality. As part of a broader emphasis in this area, CMS initiated the hospital value-based purchasing (VBP) program, which tied incentive payments for Medicare beneficiaries to hospital-based health care quality and patient satisfaction. Within this schema, 25% of performance, and its associated economic stakes, is measured by HCAHPS scores.5 Other value programs such as the Merit-Based Incentive Payment Program (MIPS) include CAHPS instruments as optional assessments of quality.
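To make the 25% weighting concrete, the sketch below computes a weighted performance composite in which the patient-experience (HCAHPS) domain contributes a quarter of the total score; the second domain, its weight, and all example scores are hypothetical placeholders rather than the actual CMS VBP domain structure.

```python
# Hypothetical weighted composite in which HCAHPS (patient experience) carries 25% of
# the total, as described above; domain names and scores are illustrative only.
domain_scores = {
    "patient_experience_hcahps": 82.0,  # example HCAHPS-based score (0-100)
    "other_quality_domains": 74.0,      # stand-in for everything outside HCAHPS
}
weights = {
    "patient_experience_hcahps": 0.25,
    "other_quality_domains": 0.75,
}

total_score = sum(domain_scores[d] * weights[d] for d in domain_scores)
print(f"Weighted composite: {total_score:.1f}")  # 0.25*82 + 0.75*74 = 76.0
```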
Given the financial risks linked to satisfaction rankings and their online visibility, many argue that patient satisfaction is prioritized in organizations above more clinically meaningful metrics. Studies have shown, however, that high levels of patient satisfaction can lead to increased patient loyalty, treatment adherence, patient retention, staff morale, and personal and professional satisfaction.6,7 In fact, not surprisingly, there is an inverse correlation between patient satisfaction and the rates of malpractice lawsuits.7-10
Despite the growing relevance of patient perceptions to clinical practice, measuring satisfaction remains a challenge. While current metrics are particular to an individual patient’s experiences, underlying health conditions influence opinions of these episodes of care. Specifically, patients with depression and anxiety are, in general, less satisfied with the care they receive.11,12 Similarly, patients with chronic diseases on multiple medications and those with more severe symptoms are commonly less satisfied with their care than are patients with acute issues2 and with milder symptoms.3 As gastroenterologists, we commonly see sicker patients with chronic conditions, which could put us at a disadvantage compared with peers in other specialties because satisfaction scores are not typically adjusted for these patient factors.
Since patient-centered metrics are likely to remain relevant in the future, and with the unique challenges this can present to practicing gastroenterologists, achieving higher degrees of patient satisfaction remains both aspirational and difficult. We will be asked to reconcile and manage not only clinical conundrums but also seemingly conflicting realities of patient preferences. For example, it has been shown that, among patients with irritable bowel syndrome (IBS), more testing led to higher satisfaction only until that testing was performed within the context of a gastroenterologist’s care.13 In contrast, within the endoscopy setting, a preprocedure diagnosis of IBS did not increase the risk for procedure-related dissatisfaction, provided patients were not prescribed chronic psychotropic medication, nervous prior to the procedure, distressed or in pain during the procedure, or had unmet physical or emotional needs during the procedure.14 Furthermore, there is poor correlation between endoscopic quality measures with strong evidence – such as adenoma detection rate, withdrawal time, and cecal intubation rate – and patient satisfaction.15
So, when considering these conflicting findings and evidence that patients’ global rating of their health care is not reliably associated with the quality of the care they receive,16 should we emphasize experience over outcome? As clinicians practicing in an increasingly transparent and value-based health care environment, we are subject to many priorities contending for our attention. We strive to provide care that is at once patient centric, evidence based, and low cost; however, achieving these goals often requires different strategies. At the end of the day, our primary aim is to provide consistently excellent patient care. We believe that quality and experience are not competing principles. Patient satisfaction is relevant and important, but it should not preclude adherence to our primary responsibility of providing high-quality care.
When trying to make clinical decisions that may compromise one of these goals for another, it can be helpful to recall the “me and my family” rule: What kind of care would I want for myself or my loved ones in this situation?
Acknowledgement
We thank Dr. Ziad Gellad (Duke University, Durham, N.C.) for his assistance in reviewing and providing feedback on this manuscript.
1. Proc Natl Acad Sci U S A. 2015;112(47):14473-8.
2. Am J Manag Care. 1997;3(4):579-94.
3. Gut. 2004;53(Suppl 4):40-4.
4. Virtual Mentor. 2013;15(11):982-7.
5. J Hosp Med. 2013;8(5):271-7.
6. Int J Health Care Qual Assur. 2011;24(4):266-73.
7. J Cutan Aesthet Surg. 2010;3(3):151-5.
8. Am J Med. 2005;118(10):1126-33.
9. JAMA. 2002;287(22):2951-7.
10. JAMA. 1994;272(20):1583-7.
11. J Diabetes Metab. 2012;3(7):1000210.
12. Am Heart J. 2000;140(1):105-10.
13. J Clin Gastroenterol. 2018;52(7):614-21.
14. Dig Dis Sci. 2005;50(10):1860-71.
15. Am J Gastroenterol. 2014;109(7):1089-91.
16. Ann Intern Med. 2006;144(9):665-72.
Dr. Finn is a gastroenterologist with the Palo Alto Medical Foundation, Mountain View, Calif.; Dr. Leiman is assistant professor of medicine, director of esophageal research and quality in the division of gastroenterology, Duke University, Duke Clinical Research Institute, and chair-elect of the AGA Quality Committee.
Although largely untouched by the first and second industrial revolutions in the 18th and 20th centuries, the practice of medicine in the 21st century is increasingly susceptible to the vast transformative power of the third – and rapidly approaching fourth – industrial revolutions. New technological advances and their associated distribution of knowledge and connectedness have allowed patients unprecedented access to health care information. The salutary effects of this change is manifest in a diversity of areas, including registries that facilitate participation in state of the art research such as ClinicalTrials.gov and the ability to track nascent trends in infectious diseases with Google searches.1
Although the stakes may seem lower when patients go online to choose a practitioner, the reality demonstrates just how important those search results can be. With parallels of similar trends in other sectors, there is an increasing emphasis on ranking health care facilities, practitioners, and medical experiences. This phenomenon extends beyond private Internet sites into government scorecards, which has significant implications. But even with widespread access to information, there is frequently a lack of context for interpreting these data. Consequently, it is worth exploring why measuring satisfaction can be important, how patients can rate practitioners, and what to do with the available information to improve care delivery.
The idea to measure patient satisfaction of delivered health care began in earnest during the 1980s with Irwin Press and Rodney Ganey collaborating to create formal processes for collecting data on the “salient aspects of ... health care experience, [involving] the interaction of expectations, preferences, and satisfaction with medical care.”2,3 The enthusiasm for collecting these data has grown greatly since that time. More recently, the federal government began obtaining data in 2002 when the Centers for Medicaid & Medicare Services and the Agency for Healthcare Research and Quality (AHRQ) collaborated to develop a standardized questionnaire for hospitalized patients known as the Hospital Consumer Assessment of Healthcare Providers and Systems, or HCAHPS.4 Subsequently, standardized survey instruments have been developed for nearly every phase of care, including outpatient care (CG-CAHPS), emergency care (ED-CAHPS), and ambulatory surgery care (OAS-CAHPS). These instruments are particularly relevant to gastroenterologists, with questions querying patients about preprocedure instructions, surgery center check-in processes, comfort of procedure and waiting rooms, friendliness of providers, and quality of postprocedure information.
The focus on rating satisfaction intensified in 2010 after the passage of the Affordable Care Act (ACA). Around this time, patient satisfaction and health outcomes became more deeply integrated concepts in health care quality. As part of a broader emphasis in this area, CMS initiated the hospital value-based purchasing (VBP) program, which tied incentive payments for Medicare beneficiaries to hospital-based health care quality and patient satisfaction. Within this schema, 25% of performance, and its associated economic stakes, is measured by HCAHPS scores.5 Other value programs such as the Merit-Based Incentive Payment Program (MIPS) include CAHPS instruments as optional assessments of quality.
Given the financial risks linked to satisfaction rankings and their online visibility, many argue that patient satisfaction is prioritized in organizations above more clinically meaningful metrics. Studies have shown, however, that high levels of patient satisfaction can lead to increased patient loyalty, treatment adherence, patient retention, staff morale, and personal and professional satisfaction.6,7 In fact, not surprisingly, there is an inverse correlation between patient satisfaction and the rates of malpractice lawsuits.7-10
Despite the growing relevance of patient perceptions to clinical practice, measuring satisfaction remains a challenge. While current metrics are particular to an individual patient’s experiences, underlying health conditions influence opinions of these episodes of care. Specifically, patients with depression and anxiety are, in general, less satisfied with the care they receive.11,12 Similarly, patients with chronic diseases on multiple medications and those with more severe symptoms are commonly less satisfied with their care than are patients with acute issues2 and with milder symptoms.3 As gastroenterologists, seeing sicker patients with chronic conditions is not uncommon, and this could serve as a disadvantage when compared with peers in other specialties because scores are not typically adjusted.
Since patient-centered metrics are likely to remain relevant in the future, and with the unique challenges this can present to practicing gastroenterologists, achieving higher degrees of patient satisfaction remains both aspirational and difficult. We will be asked to reconcile and manage not only clinical conundrums but also seemingly conflicting realities of patient preferences. For example, it has been shown that, among patients with irritable bowel syndrome (IBS), more testing led to higher satisfaction only until that testing was performed within the context of a gastroenterologist’s care.13 In contrast, within the endoscopy setting, a preprocedure diagnosis of IBS did not increase the risk for procedure-related dissatisfaction, provided patients were not prescribed chronic psychotropic medication, nervous prior to the procedure, distressed or in pain during the procedure, or had unmet physical or emotional needs during the procedure.14 Furthermore, there is poor correlation between endoscopic quality measures with strong evidence – such as adenoma detection rate, withdrawal time, and cecal intubation rate – and patient satisfaction.15
So, when considering these conflicting findings and evidence that patients’ global rating of their health care is not reliably associated with the quality of the care they receive,16 should we emphasize experience over outcome? As clinicians practicing in an increasingly transparent and value-based health care environment, we are subject to many priorities contending for our attention. We strive to provide care that is at once patient centric, evidence based, and low cost; however, achieving these goals often requires different strategies. At the end of the day, our primary aim is to provide consistently excellent patient care. We believe that quality and experience are not competing principles. Patient satisfaction is relevant and important, but it should not preclude adherence to our primary responsibility of providing high-quality care.
When trying to make clinical decisions that may compromise one of these goals for another, it can be helpful to recall the “me and my family” rule: What kind of care would I want for myself or my loved ones in this situation?
Acknowledgement
We thank Dr. Ziad Gellad (Duke University, Durham, N.C.) for his assistance in reviewing and providing feedback on this manuscript.
1. Proc Natl Acad Sci U S A. 2015;112(47):14473-8. 2. Am J Manag Care. 1997;3(4):579-94.
3. Gut. 2004;53(SUPPL. 4):40-4.
4. Virtual Mentor. 2013;15(11):982-7.
5. J Hosp Med. 2013;8(5):271-7.
6. Int J Health Care Qual Assur. 2011;24(4):266-73.
7. J Cutan Aesthet Surg. 2010;3(3):151-5.
8. Am J Med. 2005;118(10):1126-33.
9. JAMA. 2002;287(22):2951-7. 10. JAMA. 1994;272(20):1583-7.
11. J Diabetes Metab. 2012;3(7):1000210.
12. Am Heart J. 2000;140(1):105-10.
13. J Clin Gastroenterol. 2018;52(7):614-21.
14. Dig Dis Sci. 2005;50(10):1860-71.15. Am J Gastroenterol. 2014;109(7):1089-91.
16. Ann Intern Med. 2006;144(9):665-72.
Dr. Finn is a gastroenterologist with the Palo Alto Medical Foundation, Mountain View, Calif.; Dr. Leiman is assistant professor of medicine, director of esophageal research and quality in the division of gastroenterology, Duke University, Duke Clinical Research Institute, and chair-elect of the AGA Quality Committee.
Although largely untouched by the first and second industrial revolutions in the 18th and 20th centuries, the practice of medicine in the 21st century is increasingly susceptible to the vast transformative power of the third – and rapidly approaching fourth – industrial revolutions. New technological advances and their associated distribution of knowledge and connectedness have allowed patients unprecedented access to health care information. The salutary effects of this change is manifest in a diversity of areas, including registries that facilitate participation in state of the art research such as ClinicalTrials.gov and the ability to track nascent trends in infectious diseases with Google searches.1
Although the stakes may seem lower when patients go online to choose a practitioner, the reality demonstrates just how consequential those search results can be. Paralleling trends in other sectors, there is an increasing emphasis on ranking health care facilities, practitioners, and medical experiences. This phenomenon extends beyond private Internet sites to government scorecards, with significant implications. Yet even with widespread access to information, there is frequently a lack of context for interpreting these data. Consequently, it is worth exploring why measuring satisfaction can be important, how patients rate practitioners, and what to do with the available information to improve care delivery.
Efforts to measure patient satisfaction with delivered health care began in earnest during the 1980s, when Irwin Press and Rodney Ganey collaborated to create formal processes for collecting data on the “salient aspects of ... health care experience, [involving] the interaction of expectations, preferences, and satisfaction with medical care.”2,3 Enthusiasm for collecting these data has grown greatly since that time. More recently, the federal government began obtaining data in 2002, when the Centers for Medicare & Medicaid Services (CMS) and the Agency for Healthcare Research and Quality (AHRQ) collaborated to develop a standardized questionnaire for hospitalized patients known as the Hospital Consumer Assessment of Healthcare Providers and Systems, or HCAHPS.4 Subsequently, standardized survey instruments have been developed for nearly every phase of care, including outpatient care (CG-CAHPS), emergency care (ED-CAHPS), and ambulatory surgery care (OAS-CAHPS). These instruments are particularly relevant to gastroenterologists, with questions asking patients about preprocedure instructions, surgery center check-in processes, comfort of procedure and waiting rooms, friendliness of providers, and quality of postprocedure information.
The focus on rating satisfaction intensified after passage of the Affordable Care Act (ACA) in 2010. Around this time, patient satisfaction and health outcomes became more deeply integrated concepts in health care quality. As part of this broader emphasis, CMS initiated the hospital value-based purchasing (VBP) program, which tied hospitals’ incentive payments for the care of Medicare beneficiaries to measures of health care quality and patient satisfaction. Within this schema, 25% of performance, and its associated economic stakes, is measured by HCAHPS scores.5 Other value programs, such as the Merit-based Incentive Payment System (MIPS), include CAHPS instruments as optional assessments of quality.
Given the financial risks linked to satisfaction rankings and their online visibility, many argue that organizations prioritize patient satisfaction above more clinically meaningful metrics. Studies have shown, however, that high levels of patient satisfaction can lead to increased patient loyalty, treatment adherence, patient retention, staff morale, and personal and professional satisfaction.6,7 Not surprisingly, there is also an inverse correlation between patient satisfaction and rates of malpractice lawsuits.7-10
Despite the growing relevance of patient perceptions to clinical practice, measuring satisfaction remains a challenge. Although current metrics are particular to an individual patient’s experiences, underlying health conditions influence opinions of these episodes of care. Specifically, patients with depression and anxiety are, in general, less satisfied with the care they receive.11,12 Similarly, patients with chronic diseases on multiple medications and those with more severe symptoms are commonly less satisfied with their care than are patients with acute issues2 and milder symptoms.3 As gastroenterologists, we commonly see sicker patients with chronic conditions, which could place us at a disadvantage compared with peers in other specialties because satisfaction scores are not typically risk adjusted.
Because patient-centered metrics are likely to remain relevant, and given the unique challenges they can present to practicing gastroenterologists, achieving higher degrees of patient satisfaction remains both aspirational and difficult. We will be asked to reconcile and manage not only clinical conundrums but also seemingly conflicting realities of patient preferences. For example, it has been shown that, among patients with irritable bowel syndrome (IBS), more testing led to higher satisfaction unless that testing was performed within the context of a gastroenterologist’s care.13 In contrast, within the endoscopy setting, a preprocedure diagnosis of IBS did not increase the risk for procedure-related dissatisfaction, provided patients were not taking chronic psychotropic medication, were not nervous before the procedure, were not distressed or in pain during the procedure, and did not have unmet physical or emotional needs during it.14 Furthermore, endoscopic quality measures with strong supporting evidence – such as adenoma detection rate, withdrawal time, and cecal intubation rate – correlate poorly with patient satisfaction.15
So, when considering these conflicting findings and evidence that patients’ global ratings of their health care are not reliably associated with the quality of the care they receive,16 should we emphasize experience over outcome? As clinicians practicing in an increasingly transparent and value-based health care environment, we must balance many priorities contending for our attention. We strive to provide care that is at once patient centric, evidence based, and low cost; however, achieving each of these goals often requires different strategies. At the end of the day, our primary aim is to provide consistently excellent patient care. We believe that quality and experience are not competing principles. Patient satisfaction is relevant and important, but it should not preclude adherence to our primary responsibility of providing high-quality care.
When trying to make clinical decisions that may compromise one of these goals for another, it can be helpful to recall the “me and my family” rule: What kind of care would I want for myself or my loved ones in this situation?
Acknowledgement
We thank Dr. Ziad Gellad (Duke University, Durham, N.C.) for his assistance in reviewing and providing feedback on this manuscript.
1. Proc Natl Acad Sci U S A. 2015;112(47):14473-8.
2. Am J Manag Care. 1997;3(4):579-94.
3. Gut. 2004;53(Suppl 4):40-4.
4. Virtual Mentor. 2013;15(11):982-7.
5. J Hosp Med. 2013;8(5):271-7.
6. Int J Health Care Qual Assur. 2011;24(4):266-73.
7. J Cutan Aesthet Surg. 2010;3(3):151-5.
8. Am J Med. 2005;118(10):1126-33.
9. JAMA. 2002;287(22):2951-7.
10. JAMA. 1994;272(20):1583-7.
11. J Diabetes Metab. 2012;3(7):1000210.
12. Am Heart J. 2000;140(1):105-10.
13. J Clin Gastroenterol. 2018;52(7):614-21.
14. Dig Dis Sci. 2005;50(10):1860-71.
15. Am J Gastroenterol. 2014;109(7):1089-91.
16. Ann Intern Med. 2006;144(9):665-72.
Dr. Finn is a gastroenterologist with the Palo Alto Medical Foundation, Mountain View, Calif.; Dr. Leiman is assistant professor of medicine, director of esophageal research and quality in the division of gastroenterology, Duke University, Duke Clinical Research Institute, and chair-elect of the AGA Quality Committee.