Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane
Study 1 Overview (Chang et al)
Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.
Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.
Setting and participants: Patients eligible for this study were aged 65 years or older and admitted to the SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia either via intravenous propofol or inhalational sevoflurane for spine surgery between January 2015 and December 2019. Patients were retrospectively identified via electronic medical records. Patient exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the previous 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or incomplete medical records.
Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.
Main results: POD occurred in 29 patients (10.3%) out of the total cohort of 281. POD was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). Using multivariable logistic regression, inhalational sevoflurane was associated with an increased risk of POD as compared to propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score at postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.
Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.
Study 2 Overview (Mei et al)
Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.
Design: Randomized clinical trial of propofol and sevoflurane groups.
Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.
Main outcome measures: All participants were interviewed by investigators, who were blinded to the anesthesia regimen, twice daily on postoperative days 1, 2, and 3 using the CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM assessed 4 features: acute onset and fluctuating course, inattention, disorganized thinking, and altered level of consciousness. To diagnose delirium, both the first and second features must be present, in addition to either the third or fourth feature. The average of the CAM-S scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium were determined by the presence of CAM-defined delirium on any postoperative day.
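The diagnostic rule above can be sketched as a simple predicate (a minimal illustration, not code from the study; the variable names follow the standard CAM features):

```python
def cam_positive(acute_fluctuating: bool,
                 inattention: bool,
                 disorganized_thinking: bool,
                 altered_consciousness: bool) -> bool:
    """CAM diagnostic algorithm: features 1 and 2 are both required,
    plus at least one of features 3 and 4."""
    return (acute_fluctuating and inattention
            and (disorganized_thinking or altered_consciousness))

# Acute fluctuating course + inattention + disorganized thinking meets criteria:
print(cam_positive(True, True, True, False))   # True
# Without an acute, fluctuating course, the other features do not suffice:
print(cam_positive(False, True, True, True))   # False
```

This all-of-1-and-2 plus any-of-3-or-4 structure is why an assessment can score high on individual features yet remain CAM-negative.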
Main results: All eligible participants (N = 209; mean [SD] age, 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD did not differ statistically between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, chi-square test). It was estimated that 316 participants per arm would have been needed to detect a statistically significant difference. The number of days of POD per person was higher with propofol anesthesia than with sevoflurane (0.5 [0.8] vs 0.3 [0.5] days; P = .049, Student's t-test).
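As a back-of-the-envelope check, the reported statistics can be approximately reproduced with Python's standard library. This is a sketch under stated assumptions: the 2x2 counts are reconstructed from the reported percentages (35/106 and 24/103), the test is Pearson's chi-square without continuity correction, and the sample-size formula is the standard two-proportion calculation at two-sided alpha = .05 and 80% power; the trial's exact power-analysis assumptions are not given in this summary.

```python
import math
from statistics import NormalDist

# 2x2 table reconstructed from the reported percentages:
# propofol 35/106 (33.0%) vs sevoflurane 24/103 (23.3%) with POD
obs = [[35, 106 - 35], [24, 103 - 24]]
row = [sum(r) for r in obs]
col = [sum(c) for c in zip(*obs)]
n = sum(row)

# Pearson chi-square statistic, df = 1
chi2 = sum((obs[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))
p = math.erfc(math.sqrt(chi2 / 2))  # upper-tail P for chi-square with df = 1
print(round(p, 3))  # 0.119

# Two-proportion sample size per arm (two-sided alpha = .05, power = .80)
p1, p2 = 0.330, 0.233
za = NormalDist().inv_cdf(0.975)
zb = NormalDist().inv_cdf(0.80)
pbar = (p1 + p2) / 2
n_arm = math.ceil(((za * math.sqrt(2 * pbar * (1 - pbar))
                    + zb * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
                   / (p1 - p2)) ** 2)
print(n_arm)  # 337
```

The chi-square P value matches the reported .119; the standard formula gives roughly 337 participants per arm, in the same range as (though not identical to) the reported 316, a gap that presumably reflects different assumptions in the original power analysis.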
Conclusion: This underpowered study showed a 9.7% difference in the incidence of POD between older adults who received propofol (33.0%) and sevoflurane (23.3%) after THR/TKR. Further studies with a larger sample size are needed to compare general anesthetics and their role in POD.
Commentary
Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.
In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, in older patients undergoing spine surgery differentially affected the incidence of POD. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD as compared to propofol. However, these anesthetics were not associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings added new knowledge to this field of research, several limitations should be kept in mind when interpreting this study’s results. For instance, the sample size was relatively small, with all cases selected from a single center utilizing a retrospective analysis. In addition, although a standardized nursing screening tool was used as a method for delirium detection, hypoactive delirium or less symptomatic delirium may have been missed, which in turn would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research.
In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR, and their roles in POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.
Applications for Clinical Practice and System Implementation
The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step towards a better understanding of these modifiable risk factors is to clearly quantify intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxicity effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That being said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.
The factors mediating differences in neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, the differences between target receptors, such as GABAA (propofol, etomidate) or NMDA (ketamine), could underlie differences in POD incidence. Additionally, secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could play a role as well. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have all been identified as genetic factors in the metabolism of anesthetics, and variations in such genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalation vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as surgeries performed (spinal vs TKR/THR), patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S) may impact delirium outcomes. Thus, these factors should be considered in the design of future clinical trials undertaken to investigate the effects of anesthetics on POD.
Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.
Practice Points
- Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
- Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.
–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai
1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x
2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z
Study 1 Overview (Chang et al)
Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.
Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.
Setting and participants: Patients eligible for this study were aged 65 years or older admitted to the SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia either via intravenous propofol or inhalational sevoflurane for spine surgery between January 2015 and December 2019. Patients were retrospectively identified via electronic medical records. Patient exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the recent 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or incomplete medical record.
Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.
Main results: POD occurred in 29 patients (10.3%) out of the total cohort of 281. POD was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). Using multivariable logistic regression, inhalational sevoflurane was associated with an increased risk of POD as compared to propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score at postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.
Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.
Study 2 Overview (Mei et al)
Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.
Design: Randomized clinical trial of propofol and sevoflurane groups.
Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.
Main outcome measures: All participants were interviewed by investigators, who were blinded to the anesthesia regimen, twice daily on postoperative days 1, 2, and 3 using CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM encapsulated 4 criteria: acute onset and fluctuating course, agitation, disorganized thinking, and altered level of consciousness. To diagnose delirium, both the first and second criteria must be met, in addition to either the third or fourth criterion. The averages of the scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium was assessed by the presence of delirium as determined by CAM on any postoperative day.
Main results: All eligible participants (N = 209; mean [SD] age 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD was not statistically different between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, Chi-square test). It was estimated that 316 participants in each arm of the study were needed to detect statistical differences. The number of days of POD per person were higher with propofol anesthesia as compared to sevoflurane (0.5 [0.8] vs 0.3 [0.5]; P = .049, Student’s t-test).
Conclusion: This underpowered study showed a 9.7% difference in the incidence of POD between older adults who received propofol (33.0%) and sevoflurane (23.3%) after THR/TKR. Further studies with a larger sample size are needed to compare general anesthetics and their role in POD.
Commentary
Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.
In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, in older patients undergoing spine surgery differentially affected the incidence of POD. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD as compared to propofol. However, these anesthetics were not associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings added new knowledge to this field of research, several limitations should be kept in mind when interpreting this study’s results. For instance, the sample size was relatively small, with all cases selected from a single center utilizing a retrospective analysis. In addition, although a standardized nursing screening tool was used as a method for delirium detection, hypoactive delirium or less symptomatic delirium may have been missed, which in turn would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research.
In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR, and their roles in POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.
Applications for Clinical Practice and System Implementation
The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step towards a better understanding of these modifiable risk factors is to clearly quantify intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxicity effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That being said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.
The mediating factors of the differences on neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, the differences between target receptors, such as GABAA (propofol, etomidate) or NMDA (ketamine), could be a defining feature in the difference in incidence of POD. Additionally, secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could play a role as well. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have all been identified as genetic factors in the metabolism of anesthetics, and variations in such genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalation vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as surgeries performed (spinal vs TKR/THR), patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S) may impact delirium outcomes. Thus, these factors should be considered in the design of future clinical trials undertaken to investigate the effects of anesthetics on POD.
Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.
Practice Points
- Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
- Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.
–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai
Study 1 Overview (Chang et al)
Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.
Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.
Setting and participants: Patients eligible for this study were aged 65 years or older admitted to the SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia either via intravenous propofol or inhalational sevoflurane for spine surgery between January 2015 and December 2019. Patients were retrospectively identified via electronic medical records. Patient exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the recent 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or incomplete medical record.
Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.
Main results: POD occurred in 29 patients (10.3%) out of the total cohort of 281. POD was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). Using multivariable logistic regression, inhalational sevoflurane was associated with an increased risk of POD as compared to propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score at postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.
Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.
Study 2 Overview (Mei et al)
Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.
Design: Randomized clinical trial of propofol and sevoflurane groups.
Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.
Main outcome measures: All participants were interviewed by investigators, who were blinded to the anesthesia regimen, twice daily on postoperative days 1, 2, and 3 using CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM encapsulated 4 criteria: acute onset and fluctuating course, agitation, disorganized thinking, and altered level of consciousness. To diagnose delirium, both the first and second criteria must be met, in addition to either the third or fourth criterion. The averages of the scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium was assessed by the presence of delirium as determined by CAM on any postoperative day.
Main results: All eligible participants (N = 209; mean [SD] age 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD was not statistically different between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, Chi-square test). It was estimated that 316 participants in each arm of the study were needed to detect statistical differences. The number of days of POD per person were higher with propofol anesthesia as compared to sevoflurane (0.5 [0.8] vs 0.3 [0.5]; P = .049, Student’s t-test).
Conclusion: This underpowered study showed a 9.7% difference in the incidence of POD between older adults who received propofol (33.0%) and sevoflurane (23.3%) after THR/TKR. Further studies with a larger sample size are needed to compare general anesthetics and their role in POD.
Commentary
Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.
In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, differentially affected the incidence of POD in older patients undergoing spine surgery. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD than propofol. However, neither anesthetic was associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings add new knowledge to this field of research, several limitations should be kept in mind when interpreting the results. For instance, the sample size was relatively small, with all cases drawn from a single center and analyzed retrospectively. In addition, although a standardized nursing screening tool was used for delirium detection, hypoactive or less symptomatic delirium may have been missed, which would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research.
In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR, and their roles in POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.
Applications for Clinical Practice and System Implementation
The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step towards a better understanding of these modifiable risk factors is to clearly quantify intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxicity effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That being said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.
The factors mediating differences in neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, the differences between target receptors, such as GABAA (propofol, etomidate) or NMDA (ketamine), could be a defining feature in the difference in incidence of POD. Additionally, secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could also play a role. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have been identified as genetic factors in the metabolism of anesthetics, and variations in these genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalation vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as the surgeries performed (spinal vs TKR/THR), patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S), may impact delirium outcomes. Thus, these factors should be considered in the design of future clinical trials investigating the effects of anesthetics on POD.
Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.
Practice Points
- Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
- Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.
–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai
1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x
2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z
Hiccups in patients with cancer often overlooked, undertreated
But even if recognized, hiccups may not be treated effectively, according to a national survey of cancer care clinicians.
When poorly controlled, persistent hiccups can affect a patient’s quality of life, with 40% of survey respondents considering chronic hiccups “much more” or “somewhat more” severe than nausea and vomiting.
Overall, the findings indicate that patients with cancer who develop persistent hiccups are “truly suffering,” the authors wrote.
The survey results were published online recently in the American Journal of Hospice and Palliative Medicine.
Hiccups may simply be a nuisance for most, but these spasms can become problematic for patients with cancer, leading to sleep deprivation, fatigue, aspiration pneumonia, compromised food intake, weight loss, pain, and even death.
Hiccups can develop when the nerve that controls the diaphragm becomes irritated, which can be triggered by certain chemotherapy drugs.
Yet few studies have focused on hiccups in patients with cancer and none, until now, has sought the perspectives of cancer care clinicians.
Aminah Jatoi, MD, medical oncologist with the Mayo Clinic in Rochester, Minn., and two Mayo colleagues developed a survey, alongside MeterHealth, which this news organization distributed to clinicians with an interest in cancer care.
The survey gauged clinicians’ awareness or lack of awareness about clinically significant hiccups as well as treatments for hiccups and whether they consider hiccups an unmet palliative need.
A total of 684 clinicians completed two eligibility screening questions, which required them to have cared for more than 10 patients with cancer and clinically significant hiccups (defined as hiccups that lasted more than 48 hours or resulted from cancer or cancer care) in the past 6 months.
Among 113 eligible health care professionals, 90 completed the survey: 42 physicians, 29 nurses, 15 nurse practitioners, and 4 physician assistants.
The survey revealed three key issues.
The first is that hiccups appear to be an underrecognized issue.
Among health care professionals who answered the eligibility screening questions, fewer than 20% reported caring for more than 10 patients with cancer in the past 6 months who had persistent hiccups. Most of these clinicians reported caring for more than 1,000 patients per year.
Given that 15%-40% of patients with cancer report hiccups, this finding suggests that hiccups are not widely recognized by health care professionals.
Second: The survey data showed that hiccups often increase patients’ anxiety, fatigue, and sleep problems and can decrease productivity at work or school.
In fact, when comparing hiccups to nausea and vomiting – sometimes described as one of the most severe side effects of cancer care – 40% of respondents rated hiccups as “much more” or “somewhat more” severe than nausea and vomiting for their patients and 38% rated the severity of the two issues as “about the same.”
Finally, even when hiccups are recognized and treated, about 20% of respondents said that current therapies are not very effective, and more treatment options are needed.
Among the survey respondents, the most frequently prescribed medications for chronic hiccups were the antipsychotic chlorpromazine, the muscle relaxant baclofen (Lioresal), the antiemetic metoclopramide (Metozolv ODT, Reglan), and the anticonvulsants gabapentin (Neurontin) and carbamazepine (Tegretol).
Survey respondents who provided comments about current treatments for hiccups highlighted a range of challenges. One respondent said, “When current therapies do not work, it can be very demoralizing to our patients.” Another said, “I feel like it is a gamble whether treatment for hiccups will work or not.”
Still another felt that while current treatments work “quite well to halt hiccups,” they come with side effects which can be “quite severe.”
These results “clearly point to the unmet needs of hiccups in patients with cancer and should prompt more research aimed at generating more palliative options,” the authors said.
This research had no commercial funding. MeterHealth reviewed the manuscript and provided input on the accuracy of methods and results. Dr. Jatoi reports serving on an advisory board for MeterHealth (honoraria to institution).
A version of this article first appeared on Medscape.com.
FROM THE AMERICAN JOURNAL OF HOSPICE AND PALLIATIVE MEDICINE
Which anticoagulant is safest for frail elderly patients with nonvalvular A-fib?
ILLUSTRATIVE CASE
A frail 76-year-old woman with a history of hypertension and hyperlipidemia presents for evaluation of palpitations. An in-office electrocardiogram reveals that the patient is in AF. Her CHA2DS2-VASc score is 4 and her HAS-BLED score is 2.2,3 Using shared decision making, you decide to start medications for her AF. You plan to initiate a beta-blocker for rate control and must now decide on anticoagulation. Which oral anticoagulant would you prescribe for this patient’s AF, given her frail status?
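The case's CHA2DS2-VASc score of 4 can be reproduced from the standard point values (congestive heart failure 1, hypertension 1, age ≥ 75 years 2, diabetes 1, prior stroke/TIA 2, vascular disease 1, age 65-74 years 1, female sex 1); these values come from the published score, not from this article, and the sketch below is illustrative:

```python
def cha2ds2_vasc(chf=False, hypertension=False, age=0, diabetes=False,
                 stroke_tia=False, vascular_disease=False, female=False):
    """Compute the CHA2DS2-VASc stroke-risk score from its
    standard components."""
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    score += 1 if diabetes else 0
    score += 2 if stroke_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score

# The 76-year-old woman with hypertension: 2 (age >= 75) + 1 (HTN) + 1 (female)
print(cha2ds2_vasc(hypertension=True, age=76, female=True))  # 4
```

Note that hyperlipidemia contributes no points, so the patient's score is driven entirely by age, hypertension, and sex.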
Frailty is defined as a state of vulnerability with a decreased ability to recover from an acute stressful event.4 The prevalence of frailty varies by the measurements used and the population studied. A 2021 meta-analysis found that frailty prevalence ranges from 12% to 24% worldwide in patients older than 50 years5 and may increase to > 30% among those ages 85 years and older.6 Frailty increases rates of AEs such as falls7 and fracture,8 leading to disability,9 decreased quality of life,10 increased utilization of health care,11 and increased mortality.12 A number of validated approaches are available to screen for and measure frailty.13-18
Given the association with negative health outcomes and high health care utilization, frailty is an important clinical factor for physicians to consider when treating elderly patients. Frailty assessment may allow for more tailored treatment choices for patients, with a potential reduction in complications. Although CHA2DS2-VASc and HAS-BLED scores assist in the decision-making process of whether to start anticoagulation, these tools do not take frailty into consideration or guide anticoagulant choice.2,3 The purpose of this study was to analyze how levels of frailty affect the association of 3 different direct oral anticoagulants (DOACs) vs warfarin with various AEs (death, stroke, or major bleeding).
STUDY SUMMARY
This DOAC rose above the others
This retrospective cohort study compared the safety of 3 DOACs—dabigatran, rivaroxaban, and apixaban—vs warfarin in Medicare beneficiaries with AF, using 1:1 propensity score (PS)–matched analysis. Eligible patients were ages 65 years or older, with a filled prescription for a DOAC or warfarin, no prior oral anticoagulant exposure in the previous 183 days, a diagnostic code of AF, and continuous enrollment in Medicare Parts A, B, and D only. Patients were excluded if they had missing demographic data, received hospice care, resided in a nursing facility at drug initiation, had another indication for anticoagulation, or had a contraindication to either a DOAC or warfarin.
Frailty was measured using a claims-based frailty index (CFI), which applies health care utilization data to estimate a frailty index, with cut points for nonfrailty, prefrailty, and frailty. The CFI score has 93 claims-based variables, including wheelchairs and durable medical equipment, open wounds, diseases such as chronic obstructive pulmonary disease and ischemic heart disease, and transportation services.15-17 In this study, nonfrailty was defined as a CFI < 0.15, prefrailty as a CFI of 0.15 to 0.24, and frailty as a CFI ≥ 0.25.
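The CFI cut points above amount to a three-way classification rule. A minimal sketch (function and label names are mine, not the study's):

```python
def frailty_category(cfi: float) -> str:
    """Classify a claims-based frailty index (CFI) score using the
    study's cut points: < 0.15 nonfrail, 0.15-0.24 prefrail,
    >= 0.25 frail."""
    if cfi < 0.15:
        return "nonfrail"
    if cfi < 0.25:
        return "prefrail"
    return "frail"
```

By this rule, the study's PS-matched cohorts (mean CFI, 0.19 to 0.20) fall in the prefrail category.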
The primary outcome—a composite endpoint of death, ischemic stroke, or major bleeding—was measured for each of the DOAC–warfarin cohorts in the overall population and stratified by frailty classification. Patients were followed until the occurrence of a study outcome, Medicare disenrollment, the end of the study period, discontinuation of the index drug (defined as > 5 days), change to a different anticoagulant, admission to a nursing facility, enrollment in hospice, initiation of dialysis, or kidney transplant. The authors conducted a PS-matched analysis to reduce any imbalances in clinical characteristics between the DOAC- and warfarin-treated groups, as well as a sensitivity analysis to assess the strength of the data findings using different assumptions.
The authors created 3 DOAC–warfarin cohorts: dabigatran (n = 81,863) vs warfarin (n = 256,722), rivaroxaban (n = 185,011) vs warfarin (n = 228,028), and apixaban (n = 222,478) vs warfarin (n = 206,031). After PS matching, the mean age in all cohorts was 76 to 77 years, about 50% were female, and 91% were White. The mean HAS-BLED score was 2 and the mean CHA2DS2-VASc score was 4. The mean CFI was 0.19 to 0.20, defined as prefrail. Patients classified as frail were older, more likely to be female, and more likely to have greater comorbidities, higher scores on CHA2DS2-VASc and HAS-BLED, and higher health care utilization.
In the dabigatran–warfarin cohort (median follow-up, 72 days), the event rate of the composite endpoint per 1000 person-years (PY) was 63.5 for dabigatran and 65.6 for warfarin (hazard ratio [HR] = 0.98; 95% CI, 0.92 to 1.05; rate difference [RD] per 1000 PY = –2.2; 95% CI, –6.5 to 2.1). Dabigatran was associated with a lower rate of the composite endpoint than warfarin in the nonfrail subgroup but not in the prefrail or frail subgroups.
In the rivaroxaban–warfarin cohort (median follow-up, 82 days), the composite endpoint rate per 1000 PY was 77.8 for rivaroxaban and 83.7 for warfarin (HR = 0.98; 95% CI, 0.94 to 1.02; RD per 1000 PY = –5.9; 95% CI, –9.4 to –2.4). When stratifying by frailty category, both dabigatran and rivaroxaban were associated with a lower composite endpoint rate than warfarin for the nonfrail population only (HR = 0.81; 95% CI, 0.68 to 0.97, and HR = 0.88; 95% CI, 0.77 to 0.99, respectively).
In the apixaban–warfarin cohort (median follow-up, 84 days), the rate of the composite endpoint per 1000 PY was 60.1 for apixaban and 92.3 for warfarin (HR = 0.68; 95% CI, 0.65 to 0.72; RD per 1000 PY = –32.2; 95% CI, –36.1 to –28.3). The beneficial association for apixaban was present in all frailty categories, with an HR of 0.61 (95% CI, 0.52 to 0.71) for nonfrail patients, 0.66 (95% CI, 0.61 to 0.70) for prefrail patients, and 0.73 (95% CI, 0.67 to 0.80) for frail patients. Apixaban was the only DOAC with a relative reduction in the hazard of death, ischemic stroke, or major bleeding among all frailty groups.
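As a sanity check on the figures above, each reported rate difference tracks a simple subtraction of the displayed event rates per 1000 person-years (small discrepancies, as in the dabigatran cohort, reflect rounding of the displayed rates):

```python
def rate_difference(treated_rate: float, comparator_rate: float) -> float:
    """Rate difference per 1000 person-years: treated minus comparator,
    rounded to one decimal place."""
    return round(treated_rate - comparator_rate, 1)

# Apixaban vs warfarin, composite endpoint (per 1000 PY)
print(rate_difference(60.1, 92.3))  # -32.2, matching the reported RD
```

The rivaroxaban cohort checks out the same way (77.8 − 83.7 = −5.9 per 1000 PY).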
WHAT’S NEW
Only apixaban had lower AE rates vs warfarin across frailty levels
Three DOACs (dabigatran, rivaroxaban, and apixaban) reduced the risk of death, ischemic stroke, or major bleeding compared with warfarin in older adults with AF, but only apixaban was associated with a relative reduction of these adverse outcomes in patients of all frailty classifications.
CAVEATS
Important data but RCTs are needed
This study's size gives it considerable statistical power. However, it remains a retrospective observational study, subject to the biases and residual confounding inherent in that design. The authors attempted to account for these limitations by performing a PS-matched analysis and a sensitivity analysis to assess the robustness of the findings under different assumptions; nevertheless, the findings should be confirmed in randomized controlled trials.
Additionally, the study collected data on each of the DOAC–warfarin cohorts for < 90 days. Trials to address long-term outcomes are warranted.
Finally, there was no control group in comparison with anticoagulation. It is possible that choosing not to use an anticoagulant is the best choice for frail elderly patients.
CHALLENGES TO IMPLEMENTATION
Doctors need a practical frailty scale, patients need an affordable Rx
Frailty is not often considered a measurable trait. The approach used in the study to determine the CFI is not a practical clinical tool. Studies comparing a frailty calculation software application or an easily implementable survey may help bring this clinically impactful information to the hands of primary care physicians. The Clinical Frailty Scale—a brief, 7-point scale based on the physician’s clinical impression of the patient—has been found to correlate with other established frailty measures18 and might be an option for busy clinicians. However, the current study did not utilize this measurement, and the validity of its use by primary care physicians in the outpatient setting requires further study.
In addition, cost may be a barrier for patients younger than 65 years or for those older than 65 years who do not qualify for Medicare or do not have Medicare Part D. The average monthly cost of the DOACs ranges from $560 for dabigatran19 to $600 for rivaroxaban20 and $623 for apixaban.21 As always, the choice of anticoagulant therapy is a clinical judgment and a joint decision of the patient and physician.
1. Kim DH, Pawar A, Gagne JJ, et al. Frailty and clinical outcomes of direct oral anticoagulants versus warfarin in older adults with atrial fibrillation: a cohort study. Ann Intern Med. 2021;174:1214-1223. doi: 10.7326/M20-7141
2. Zhu W, He W, Guo L, et al. The HAS-BLED score for predicting major bleeding risk in anticoagulated patients with atrial fibrillation: a systematic review and meta-analysis. Clin Cardiol. 2015;38:555-561. doi: 10.1002/clc.22435
3. Olesen JB, Lip GYH, Hansen ML, et al. Validation of risk stratification schemes for predicting stroke and thromboembolism in patients with atrial fibrillation: nationwide cohort study. BMJ. 2011;342:d124. doi: 10.1136/bmj.d124
4. Xue QL. The frailty syndrome: definition and natural history. Clin Geriatr Med. 2011;27:1-15. doi: 10.1016/j.cger.2010.08.009
5. O’Caoimh R, Sezgin D, O’Donovan MR, et al. Prevalence of frailty in 62 countries across the world: a systematic review and meta-analysis of population-level studies. Age Ageing. 2021;50:96-104. doi: 10.1093/ageing/afaa219
6. Campitelli MA, Bronskill SE, Hogan DB, et al. The prevalence and health consequences of frailty in a population-based older home care cohort: a comparison of different measures. BMC Geriatr. 2016;16:133. doi: 10.1186/s12877-016-0309-z
7. Kojima G. Frailty as a predictor of future falls among community-dwelling older people: a systematic review and meta-analysis. J Am Med Dir Assoc. 2015;16:1027-1033. doi: 10.1016/j.jamda. 2015.06.018
8. Kojima G. Frailty as a predictor of fractures among community-dwelling older people: a systematic review and meta-analysis. Bone. 2016;90:116-122. doi: 10.1016/j.bone.2016.06.009
9. Kojima G. Quick and simple FRAIL scale predicts incident activities of daily living (ADL) and instrumental ADL (IADL) disabilities: a systematic review and meta-analysis. J Am Med Dir Assoc. 2018;19:1063-1068. doi: 10.1016/j.jamda.2018.07.019
10. Kojima G, Liljas AEM, Iliffe S. Frailty syndrome: implications and challenges for health care policy. Risk Manag Healthc Policy. 2019;12:23-30. doi: 10.2147/RMHP.S168750
11. Roe L, Normand C, Wren MA, et al. The impact of frailty on healthcare utilisation in Ireland: evidence from The Irish Longitudinal Study on Ageing. BMC Geriatr. 2017;17:203. doi: 10.1186/s12877-017-0579-0
12. Hao Q, Zhou L, Dong B, et al. The role of frailty in predicting mortality and readmission in older adults in acute care wards: a prospective study. Sci Rep. 2019;9:1207. doi: 10.1038/s41598-018-38072-7
13. Fried LP, Tangen CM, Walston J, et al; Cardiovascular Health Study Collaborative Research Group. Frailty in older adults: evidence for a phenotype. J Gerontol A Biol Sci Med Sci. 2001;56:M146-M156. doi: 10.1093/gerona/56.3.m146
14. Ryan J, Espinoza S, Ernst ME, et al. Validation of a deficit-accumulation frailty Index in the ASPirin in Reducing Events in the Elderly study and its predictive capacity for disability-free survival. J Gerontol A Biol Sci Med Sci. 2022;77:19-26. doi: 10.1093/gerona/glab225
15. Kim DH, Glynn RJ, Avorn J, et al. Validation of a claims-based frailty index against physical performance and adverse health outcomes in the Health and Retirement Study. J Gerontol A Biol Sci Med Sci. 2019;74:1271-1276. doi: 10.1093/gerona/gly197
16. Kim DH, Schneeweiss S, Glynn RJ, et al. Measuring frailty in Medicare data: development and validation of a claims-based frailty index. J Gerontol A Biol Sci Med Sci. 2018;73:980-987. doi: 10.1093/gerona/glx229
17. Claims-based frailty index. Harvard Dataverse website. 2022. Accessed April 5, 2022. https://dataverse.harvard.edu/dataverse/cfi
18. Rockwood K, Song X, MacKnight C, et al. A global clinical measure of fitness and frailty in elderly people. CMAJ. 2005;173:489-95. doi: 10.1503/cmaj.050051
19. Dabigatran. GoodRx. Accessed September 26, 2022. www.goodrx.com/dabigatran
20. Rivaroxaban. GoodRx. Accessed September 26, 2022. www.goodrx.com/rivaroxaban
21. Apixaban (Eliquis). GoodRx. Accessed September 26, 2022. www.goodrx.com/eliquis
ILLUSTRATIVE CASE
A frail 76-year-old woman with a history of hypertension and hyperlipidemia presents for evaluation of palpitations. An in-office electrocardiogram reveals that the patient is in AF. Her CHA2DS2-VASc score is 4 and her HAS-BLED score is 2.2,3 Using shared decision making, you decide to start medications for her AF. You plan to initiate a beta-blocker for rate control and must now decide on anticoagulation. Which oral anticoagulant would you prescribe for this patient’s AF, given her frail status?
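The scores quoted in the case can be checked against the conventional CHA2DS2-VASc weights. The helper below is purely illustrative (the function name and signature are our own; the weights are the standard ones, not taken from this article):

```python
def cha2ds2_vasc(age, female, chf=False, htn=False, diabetes=False,
                 stroke_tia=False, vascular=False):
    # Standard weights: CHF = 1, Hypertension = 1, Age >= 75 = 2,
    # Diabetes = 1, prior Stroke/TIA/thromboembolism = 2,
    # Vascular disease = 1, Age 65-74 = 1, female Sex = 1.
    score = 0
    score += 1 if chf else 0
    score += 1 if htn else 0
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    score += 1 if diabetes else 0
    score += 2 if stroke_tia else 0
    score += 1 if vascular else 0
    score += 1 if female else 0
    return score

# The 76-year-old woman with hypertension from the case:
# age >= 75 (2) + hypertension (1) + female sex (1) = 4
print(cha2ds2_vasc(age=76, female=True, htn=True))  # -> 4
```

Hyperlipidemia carries no CHA2DS2-VASc weight, which is why her score is 4 rather than 5.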
Frailty is defined as a state of vulnerability with a decreased ability to recover from an acute stressful event.4 The prevalence of frailty varies by the measurements used and the population studied. A 2021 meta-analysis found that frailty prevalence ranges from 12% to 24% worldwide in patients older than 50 years5 and may increase to > 30% among those ages 85 years and older.6 Frailty increases rates of adverse events (AEs) such as falls7 and fracture,8 leading to disability,9 decreased quality of life,10 increased utilization of health care,11 and increased mortality.12 A number of validated approaches are available to screen for and measure frailty.13-18
Given the association with negative health outcomes and high health care utilization, frailty is an important clinical factor for physicians to consider when treating elderly patients. Frailty assessment may allow for more tailored treatment choices for patients, with a potential reduction in complications. Although CHA2DS2-VASc and HAS-BLED scores assist in the decision-making process of whether to start anticoagulation, these tools do not take frailty into consideration or guide anticoagulant choice.2,3 The purpose of this study was to analyze how levels of frailty affect the association of 3 different direct oral anticoagulants (DOACs) vs warfarin with various AEs (death, stroke, or major bleeding).
STUDY SUMMARY
This DOAC rose above the others
This retrospective cohort study compared the safety of 3 DOACs—dabigatran, rivaroxaban, and apixaban—vs warfarin in Medicare beneficiaries with AF, using 1:1 propensity score (PS)–matched analysis. Eligible patients were ages 65 years or older, with a filled prescription for a DOAC or warfarin, no prior oral anticoagulant exposure in the previous 183 days, a diagnostic code of AF, and continuous enrollment in Medicare Parts A, B, and D only. Patients were excluded if they had missing demographic data, received hospice care, resided in a nursing facility at drug initiation, had another indication for anticoagulation, or had a contraindication to either a DOAC or warfarin.
Frailty was measured using a claims-based frailty index (CFI), which applies health care utilization data to estimate a frailty index, with cut points for nonfrailty, prefrailty, and frailty. The CFI score has 93 claims-based variables, including wheelchairs and durable medical equipment, open wounds, diseases such as chronic obstructive pulmonary disease and ischemic heart disease, and transportation services.15-17 In this study, nonfrailty was defined as a CFI < 0.15, prefrailty as a CFI of 0.15 to 0.24, and frailty as a CFI ≥ 0.25.
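The study's cut points map a continuous CFI value onto 3 categories. A minimal sketch (the helper name is hypothetical; the thresholds are the ones quoted above):

```python
def cfi_category(cfi):
    # Cut points as defined in the study:
    # nonfrail < 0.15, prefrail 0.15-0.24, frail >= 0.25.
    if cfi < 0.15:
        return "nonfrail"
    elif cfi < 0.25:
        return "prefrail"
    return "frail"

# The mean CFI of 0.19 in the matched cohorts falls in the prefrail band:
print(cfi_category(0.19))  # -> "prefrail"
```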
The primary outcome—a composite endpoint of death, ischemic stroke, or major bleeding—was measured for each of the DOAC–warfarin cohorts in the overall population and stratified by frailty classification. Patients were followed until the occurrence of a study outcome, Medicare disenrollment, the end of the study period, discontinuation of the index drug (defined as > 5 days), change to a different anticoagulant, admission to a nursing facility, enrollment in hospice, initiation of dialysis, or kidney transplant. The authors conducted a PS-matched analysis to reduce any imbalances in clinical characteristics between the DOAC- and warfarin-treated groups, as well as a sensitivity analysis to assess the strength of the data findings using different assumptions.
The authors created 3 DOAC–warfarin cohorts: dabigatran (n = 81,863) vs warfarin (n = 256,722), rivaroxaban (n = 185,011) vs warfarin (n = 228,028), and apixaban (n = 222,478) vs warfarin (n = 206,031). After PS matching, the mean age in all cohorts was 76 to 77 years, about 50% were female, and 91% were White. The mean HAS-BLED score was 2 and the mean CHA2DS2-VASc score was 4. The mean CFI was 0.19 to 0.20, defined as prefrail. Patients classified as frail were older, more likely to be female, and more likely to have greater comorbidities, higher scores on CHA2DS2-VASc and HAS-BLED, and higher health care utilization.
In the dabigatran–warfarin cohort (median follow-up, 72 days), the event rate of the composite endpoint per 1000 person-years (PY) was 63.5 for dabigatran and 65.6 for warfarin (hazard ratio [HR] = 0.98; 95% CI, 0.92 to 1.05; rate difference [RD] per 1000 PY = –2.2; 95% CI, –6.5 to 2.1). Dabigatran was associated with a lower rate of the composite endpoint than warfarin in the nonfrail subgroup but not in the prefrail or frail subgroups.
In the rivaroxaban–warfarin cohort (median follow-up, 82 days), the composite endpoint rate per 1000 PY was 77.8 for rivaroxaban and 83.7 for warfarin (HR = 0.98; 95% CI, 0.94 to 1.02; RD per 1000 PY = –5.9; 95% CI, –9.4 to –2.4). When stratifying by frailty category, both dabigatran and rivaroxaban were associated with a lower composite endpoint rate than warfarin for the nonfrail population only (HR = 0.81; 95% CI, 0.68 to 0.97, and HR = 0.88; 95% CI, 0.77 to 0.99, respectively).
In the apixaban–warfarin cohort (median follow-up, 84 days), the rate of the composite endpoint per 1000 PY was 60.1 for apixaban and 92.3 for warfarin (HR = 0.68; 95% CI, 0.65 to 0.72; RD per 1000 PY = –32.2; 95% CI, –36.1 to –28.3). The beneficial association for apixaban was present in all frailty categories, with an HR of 0.61 (95% CI, 0.52 to 0.71) for nonfrail patients, 0.66 (95% CI, 0.61 to 0.70) for prefrail patients, and 0.73 (95% CI, 0.67 to 0.80) for frail patients. Apixaban was the only DOAC with a relative reduction in the hazard of death, ischemic stroke, or major bleeding among all frailty groups.
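The reported rate differences can be sanity-checked with simple arithmetic on the crude event rates (an illustrative helper only; the published RDs and their CIs are model-based estimates, so the crude subtraction will not always match exactly):

```python
def rate_difference_per_1000py(rate_treated, rate_comparator):
    # Both inputs are event rates per 1000 person-years;
    # a negative RD favors the treated drug.
    return rate_treated - rate_comparator

# Apixaban vs warfarin composite-endpoint rates from the study:
rd = rate_difference_per_1000py(60.1, 92.3)
print(round(rd, 1))  # -> -32.2, matching the reported RD
```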
WHAT’S NEW
Only apixaban had lower AE rates vs warfarin across frailty levels
In older adults with AF, each of the 3 DOACs (dabigatran, rivaroxaban, and apixaban) was associated with a lower risk of death, ischemic stroke, or major bleeding than warfarin in at least 1 frailty subgroup, but only apixaban was associated with a relative reduction in these adverse outcomes across all frailty classifications.
CAVEATS
Important data but RCTs are needed
The large sample size gives this observational study considerable power; however, it remains retrospective, with the attendant risk of residual confounding. The authors attempted to account for these limitations and potential confounders by performing a PS-matched analysis and a sensitivity analysis; even so, these findings should be confirmed in randomized controlled trials.
Additionally, median follow-up in each of the DOAC–warfarin cohorts was less than 90 days. Trials addressing long-term outcomes are warranted.
Finally, there was no nonanticoagulated control group. It is possible that forgoing anticoagulation altogether is the best choice for some frail elderly patients.
CHALLENGES TO IMPLEMENTATION
Doctors need a practical frailty scale, patients need an affordable Rx
Frailty is not often considered a measurable trait. The approach used in the study to determine the CFI is not a practical clinical tool. Studies evaluating frailty-calculation software or an easily administered survey may help put this clinically impactful information in the hands of primary care physicians. The Clinical Frailty Scale—a brief, 7-point scale based on the physician’s clinical impression of the patient—has been found to correlate with other established frailty measures18 and might be an option for busy clinicians. However, the current study did not utilize this measurement, and the validity of its use by primary care physicians in the outpatient setting requires further study.
In addition, cost may be a barrier for patients younger than 65 years or for those older than 65 years who do not qualify for Medicare or do not have Medicare Part D. The average monthly cost of the DOACs ranges from $560 for dabigatran19 to $600 for rivaroxaban20 and $623 for apixaban.21 As always, the choice of anticoagulant therapy is a clinical judgment and a joint decision of the patient and physician.
1. Kim DH, Pawar A, Gagne JJ, et al. Frailty and clinical outcomes of direct oral anticoagulants versus warfarin in older adults with atrial fibrillation: a cohort study. Ann Intern Med. 2021;174:1214-1223. doi: 10.7326/M20-7141
2. Zhu W, He W, Guo L, et al. The HAS-BLED score for predicting major bleeding risk in anticoagulated patients with atrial fibrillation: a systematic review and meta-analysis. Clin Cardiol. 2015;38:555-561. doi: 10.1002/clc.22435
3. Olesen JB, Lip GYH, Hansen ML, et al. Validation of risk stratification schemes for predicting stroke and thromboembolism in patients with atrial fibrillation: nationwide cohort study. BMJ. 2011;342:d124. doi: 10.1136/bmj.d124
4. Xue QL. The frailty syndrome: definition and natural history. Clin Geriatr Med. 2011;27:1-15. doi: 10.1016/j.cger.2010.08.009
5. O’Caoimh R, Sezgin D, O’Donovan MR, et al. Prevalence of frailty in 62 countries across the world: a systematic review and meta-analysis of population-level studies. Age Ageing. 2021;50:96-104. doi: 10.1093/ageing/afaa219
6. Campitelli MA, Bronskill SE, Hogan DB, et al. The prevalence and health consequences of frailty in a population-based older home care cohort: a comparison of different measures. BMC Geriatr. 2016;16:133. doi: 10.1186/s12877-016-0309-z
7. Kojima G. Frailty as a predictor of future falls among community-dwelling older people: a systematic review and meta-analysis. J Am Med Dir Assoc. 2015;16:1027-1033. doi: 10.1016/j.jamda.2015.06.018
8. Kojima G. Frailty as a predictor of fractures among community-dwelling older people: a systematic review and meta-analysis. Bone. 2016;90:116-122. doi: 10.1016/j.bone.2016.06.009
9. Kojima G. Quick and simple FRAIL scale predicts incident activities of daily living (ADL) and instrumental ADL (IADL) disabilities: a systematic review and meta-analysis. J Am Med Dir Assoc. 2018;19:1063-1068. doi: 10.1016/j.jamda.2018.07.019
10. Kojima G, Liljas AEM, Iliffe S. Frailty syndrome: implications and challenges for health care policy. Risk Manag Healthc Policy. 2019;12:23-30. doi: 10.2147/RMHP.S168750
11. Roe L, Normand C, Wren MA, et al. The impact of frailty on healthcare utilisation in Ireland: evidence from The Irish Longitudinal Study on Ageing. BMC Geriatr. 2017;17:203. doi: 10.1186/s12877-017-0579-0
12. Hao Q, Zhou L, Dong B, et al. The role of frailty in predicting mortality and readmission in older adults in acute care wards: a prospective study. Sci Rep. 2019;9:1207. doi: 10.1038/s41598-018-38072-7
13. Fried LP, Tangen CM, Walston J, et al; Cardiovascular Health Study Collaborative Research Group. Frailty in older adults: evidence for a phenotype. J Gerontol A Biol Sci Med Sci. 2001;56:M146-M156. doi: 10.1093/gerona/56.3.m146
14. Ryan J, Espinoza S, Ernst ME, et al. Validation of a deficit-accumulation frailty Index in the ASPirin in Reducing Events in the Elderly study and its predictive capacity for disability-free survival. J Gerontol A Biol Sci Med Sci. 2022;77:19-26. doi: 10.1093/gerona/glab225
15. Kim DH, Glynn RJ, Avorn J, et al. Validation of a claims-based frailty index against physical performance and adverse health outcomes in the Health and Retirement Study. J Gerontol A Biol Sci Med Sci. 2019;74:1271-1276. doi: 10.1093/gerona/gly197
16. Kim DH, Schneeweiss S, Glynn RJ, et al. Measuring frailty in Medicare data: development and validation of a claims-based frailty index. J Gerontol A Biol Sci Med Sci. 2018;73:980-987. doi: 10.1093/gerona/glx229
17. Claims-based frailty index. Harvard Dataverse website. 2022. Accessed April 5, 2022. https://dataverse.harvard.edu/dataverse/cfi
18. Rockwood K, Song X, MacKnight C, et al. A global clinical measure of fitness and frailty in elderly people. CMAJ. 2005;173:489-495. doi: 10.1503/cmaj.050051
19. Dabigatran. GoodRx. Accessed September 26, 2022. www.goodrx.com/dabigatran
20. Rivaroxaban. GoodRx. Accessed September 26, 2022. www.goodrx.com/rivaroxaban
21. Apixaban (Eliquis). GoodRx. Accessed September 26, 2022. www.goodrx.com/eliquis
PRACTICE CHANGER
Consider apixaban, which demonstrated a lower adverse event (AE) rate than warfarin regardless of frailty status, for anticoagulation treatment of older patients with nonvalvular atrial fibrillation (AF); by comparison, AE rates for dabigatran and rivaroxaban were lower vs warfarin only among nonfrail individuals.
STRENGTH OF RECOMMENDATION
C: Based on a retrospective observational cohort study.1
Kim DH, Pawar A, Gagne JJ, et al. Frailty and clinical outcomes of direct oral anticoagulants versus warfarin in older adults with atrial fibrillation: a cohort study. Ann Intern Med. 2021;174:1214-1223. doi: 10.7326/M20-7141
Prednisone, colchicine equivalent in efficacy for CPP crystal arthritis
PHILADELPHIA – Prednisone appears to have the edge over colchicine in the management of acute calcium pyrophosphate (CPP) crystal arthritis, an intensely painful rheumatic disease primarily affecting older patients.
Among 111 patients with acute CPP crystal arthritis randomized to receive either prednisone or colchicine for control of acute pain in a multicenter study, 2 days of therapy with the oral agents provided equivalent pain relief on the second day, and patients generally tolerated each agent well, reported Tristan Pascart, MD, from the Groupement Hospitalier de l’Institut Catholique de Lille (France).
“Almost three-fourths of patients are considered to be good responders to both drugs on day 3, and, maybe, safety is the key issue distinguishing the two treatments: Colchicine was generally well tolerated, but even with this very short time frame of treatment, one patient out of five had diarrhea, which is more of a concern in this elderly population at risk of dehydration,” he said in an oral abstract session at the annual meeting of the American College of Rheumatology.
In contrast, only about 6% of patients assigned to prednisone had diarrhea, and the other adverse events that occurred more frequently with the corticosteroid, including hypertension, hyperglycemia, and insomnia, all resolved after the therapy was stopped.
Common and acutely painful
Acute CPP crystal arthritis is a common complication that often occurs during hospitalization for primarily nonrheumatologic causes, and “in the absence of clinical trials, the management relies on expert opinion, which stems from extrapolated data from gout studies,” primarily with prednisone or colchicine, Dr. Pascart said.
To fill in the knowledge gap, Dr. Pascart and colleagues conducted the COLCHICORT study to evaluate whether the two drugs were comparable in efficacy and safety for control of acute pain in a vulnerable population.
The multicenter, open-label trial included patients older than age 65 years with an estimated glomerular filtration rate above 30 mL/min per 1.73 m2 who presented with acute CPP deposition arthritis with symptoms occurring within the previous 36 hours. CPP arthritis was defined by the identification of CPP crystals on synovial fluid analysis or typical clinical presentation with evidence of chondrocalcinosis on x-rays or ultrasound.
Patients with a history of gout, cognitive decline that could impair pain assessment, or contraindications to either of the study drugs were excluded.
The participants were randomized to receive either colchicine 1.5 mg (1 mg to start, then 0.5 mg one hour later) at baseline and then 1 mg on day 1, or oral prednisone 30 mg at baseline and on day 1. The patients also received 1 g of systemic acetaminophen and three 50-mg doses of tramadol during the first 24 hours.
Of the 111 patients randomized, 54 were assigned to receive prednisone, and 57 were assigned to receive colchicine. Baseline characteristics were similar between the groups, with a mean age of about 86 years, body mass index of around 25 kg/m2, and blood pressure in the range of 130/69 mm Hg.
For nearly half of all patients in each study arm, the most painful joint was the knee, followed by the wrists and ankles.
There was no difference between the groups in the primary efficacy outcome of change at 24 hours over baseline in visual analog scale (VAS; 0-100 mm) scores, either in a per-protocol analysis or a modified intention-to-treat analysis. The mean change in VAS at 24 hours in the colchicine group was –36.6 mm, compared with –37.7 mm in the prednisone group. The investigators had previously determined that any difference between the two drugs of less than 13 mm on pain VAS at 24 hours would meet the definition for equivalent efficacy.
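The prespecified equivalence margin can be sketched as a point-estimate check (a hypothetical helper; the trial's actual analysis was based on confidence intervals around the between-group difference, not on a bare comparison of means):

```python
def equivalent_on_vas(change_a, change_b, margin_mm=13.0):
    # The trial prespecified equivalence as an absolute between-group
    # difference of less than 13 mm on the 0-100 mm pain VAS at 24 hours.
    return abs(change_a - change_b) < margin_mm

# Mean 24-hour VAS changes: colchicine -36.6 mm, prednisone -37.7 mm.
# |(-36.6) - (-37.7)| = 1.1 mm, well inside the 13-mm margin.
print(equivalent_on_vas(-36.6, -37.7))  # -> True
```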
In both groups, a majority of patients had an improvement of greater than 50% in pain VAS score and/or a pain VAS score less than 40 mm at both 24 and 48 hours.
At 7 days of follow-up, 21.8% of patients assigned to colchicine had diarrhea, compared with 5.6% of those assigned to prednisone. Adverse events occurring more frequently with prednisone included hyperglycemia, hypertension, and insomnia.
Patients who received colchicine and were also on statins had a trend toward a higher risk for diarrhea, but the study was not adequately powered to detect an association, and the trend was not statistically significant, Dr. Pascart said.
“Taken together, safety issues suggest that prednisone should be considered as the first-line therapy in acute CPP crystal arthritis. Future research is warranted to determine factors increasing the risk of colchicine-induced diarrhea,” he concluded.
Both drugs are used
Sara K. Tedeschi, MD, from Brigham & Women’s Hospital in Boston, who attended the session where the data were presented, has a special clinical interest in CPP deposition disease. She applauded Dr. Pascart and colleagues for conducting a rare clinical trial in CPP crystal arthritis.
In an interview, she said that the study suggests “we can keep in mind shorter courses of treatment for acute CPP crystal arthritis; I think that’s one big takeaway from this study.”
Asked whether she would change her practice based on the findings, Dr. Tedeschi replied: “I personally am not sure that I would be moved to use prednisone more than colchicine; I actually take away from this that colchicine is equivalent to prednisone for short-term use for CPP arthritis, but I think it’s also really important to note that this is in the context of quite a lot of acetaminophen and quite a lot of tramadol, and frankly I don’t usually use tramadol with my patients, but I might consider doing that, especially as there were no delirium events in this population.”
Dr. Tedeschi was not involved in the study.
Asked the same question, Michael Toprover, MD, from New York University Langone Medical Center, a moderator of the session who was not involved in the study, said: “I usually use a combination of medications. I generally, in someone who is hospitalized in particular and is in such severe pain, use a combination of colchicine and prednisone, unless I’m worried about infection, in which case I’ll start colchicine until we’ve proven that it’s CPPD, and then I’ll add prednisone.”
The study was funded by PHRC-1 GIRCI Nord Ouest, a clinical research program funded by the Ministry of Health in France. Dr. Pascart, Dr. Tedeschi, and Dr. Toprover all reported having no relevant conflicts of interest.
PHILADELPHIA – Prednisone appears to have the edge over colchicine for control of pain in patients with acute calcium pyrophosphate (CPP) crystal arthritis, an intensely painful rheumatic disease primarily affecting older patients.
Among 111 patients with acute CPP crystal arthritis randomized to receive either prednisone or colchicine for control of acute pain in a multicenter study, 2 days of therapy with the oral agents provided equivalent pain relief on the second day, and patients generally tolerated each agent well, reported Tristan Pascart, MD, from the Groupement Hospitalier de l’Institut Catholique de Lille (France).
“Almost three-fourths of patients are considered to be good responders to both drugs on day 3, and, maybe, safety is the key issue distinguishing the two treatments: Colchicine was generally well tolerated, but even with this very short time frame of treatment, one patient out of five had diarrhea, which is more of a concern in this elderly population at risk of dehydration,” he said in an oral abstract session at the annual meeting of the American College of Rheumatology.
In contrast, only about 6% of patients assigned to prednisone had diarrhea, and the other adverse events that occurred more frequently with the corticosteroid, including hypertension, hyperglycemia, and insomnia, all resolved after the therapy was stopped.
Common and acutely painful
Acute CPP crystal arthritis is a common complication that often occurs during hospitalization for primarily nonrheumatologic causes, Dr. Pascart said, and “in the absence of clinical trials, the management relies on expert opinion, which stems from extrapolated data from gout studies,” primarily with prednisone or colchicine.
To fill in the knowledge gap, Dr. Pascart and colleagues conducted the COLCHICORT study to evaluate whether the two drugs were comparable in efficacy and safety for control of acute pain in a vulnerable population.
The multicenter, open-label trial included patients older than 65 years with an estimated glomerular filtration rate above 30 mL/min per 1.73 m2 who presented with acute CPP deposition arthritis and symptom onset within the previous 36 hours. CPP arthritis was defined by the identification of CPP crystals on synovial fluid analysis or by a typical clinical presentation with evidence of chondrocalcinosis on x-rays or ultrasound.
Patients with a history of gout, cognitive decline that could impair pain assessment, or contraindications to either of the study drugs were excluded.
The participants were randomized to receive either colchicine 1.5 mg at baseline (1 mg to start, then 0.5 mg one hour later) followed by 1 mg on day 1, or oral prednisone 30 mg at baseline and on day 1. The patients also received 1 g of systemic acetaminophen and three 50-mg doses of tramadol during the first 24 hours.
Of the 111 patients randomized, 54 were assigned to receive prednisone and 57 to receive colchicine. Baseline characteristics were similar between the groups, with a mean age of about 86 years, a mean body mass index of around 25 kg/m2, and mean blood pressure of about 130/69 mm Hg.
For nearly half of all patients in each study arm, the most painful joint was the knee, followed by the wrists and ankles.
There was no difference between the groups in the primary efficacy outcome, the change from baseline at 24 hours in pain scores on a 0-100 mm visual analog scale (VAS), in either the per-protocol or the modified intention-to-treat analysis. The mean change in VAS at 24 hours was –36.6 mm in the colchicine group, compared with –37.7 mm in the prednisone group. The investigators had prespecified that a between-group difference of less than 13 mm on the pain VAS at 24 hours would meet the definition of equivalent efficacy.
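The equivalence logic above can be restated in a brief sketch. This is only an illustration of the reported point estimates against the prespecified 13-mm margin, not the trial's actual statistical analysis (which would rest on confidence intervals around the difference):

```python
# Illustrative sketch (not the trial's analysis): comparing the reported
# mean 24-hour VAS changes against the prespecified 13-mm equivalence margin.
colchicine_change = -36.6  # mean VAS change at 24 h, mm
prednisone_change = -37.7  # mean VAS change at 24 h, mm
margin = 13.0              # prespecified equivalence margin, mm

difference = abs(colchicine_change - prednisone_change)
print(f"Between-group difference: {difference:.1f} mm")
print("Within equivalence margin" if difference < margin else "Outside margin")
```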
In both groups, a majority of patients had an improvement greater than 50% in pain VAS score and/or a pain VAS score less than 40 mm at both 24 and 48 hours.
At 7 days of follow-up, 21.8% of patients assigned to colchicine had diarrhea, compared with 5.6% of those assigned to prednisone. Adverse events occurring more frequently with prednisone included hyperglycemia, hypertension, and insomnia.
Patients who received colchicine and were also on statins had a trend toward a higher risk for diarrhea, but the study was not adequately powered to detect an association, and the trend was not statistically significant, Dr. Pascart said.
“Taken together, safety issues suggest that prednisone should be considered as the first-line therapy in acute CPP crystal arthritis. Future research is warranted to determine factors increasing the risk of colchicine-induced diarrhea,” he concluded.
Both drugs are used
Sara K. Tedeschi, MD, from Brigham & Women’s Hospital in Boston, who attended the session where the data were presented, has a special clinical interest in CPP deposition disease. She applauded Dr. Pascart and colleagues for conducting a rare clinical trial in CPP crystal arthritis.
In an interview, she said that the study suggests “we can keep in mind shorter courses of treatment for acute CPP crystal arthritis; I think that’s one big takeaway from this study.”
Asked whether she would change her practice based on the findings, Dr. Tedeschi replied: “I personally am not sure that I would be moved to use prednisone more than colchicine; I actually take away from this that colchicine is equivalent to prednisone for short-term use for CPP arthritis, but I think it’s also really important to note that this is in the context of quite a lot of acetaminophen and quite a lot of tramadol, and frankly I don’t usually use tramadol with my patients, but I might consider doing that, especially as there were no delirium events in this population.”
Dr. Tedeschi was not involved in the study.
Asked the same question, Michael Toprover, MD, from New York University Langone Medical Center, a moderator of the session who was not involved in the study, said: “I usually use a combination of medications. I generally, in someone who is hospitalized in particular and is in such severe pain, use a combination of colchicine and prednisone, unless I’m worried about infection, in which case I’ll start colchicine until we’ve proven that it’s CPPD, and then I’ll add prednisone.”
The study was funded by PHRC-1 GIRCI Nord Ouest, a clinical research program funded by the Ministry of Health in France. Dr. Pascart, Dr. Tedeschi, and Dr. Toprover all reported having no relevant conflicts of interest.
AT ACR 2022
Total replacement and fusion yield similar outcomes for ankle osteoarthritis
Ankle osteoarthritis remains a cause of severe pain and disability. Patients are treated nonoperatively if possible, but surgery is often needed for individuals with end-stage disease, wrote Andrew Goldberg, MBBS, of University College London and colleagues in the Annals of Internal Medicine.
“Most patients with ankle arthritis respond to nonoperative treatments, such as weight loss, activity modification, support braces, and analgesia, [but] once the disease has progressed to end-stage osteoarthritis, the main surgical treatments are total ankle replacement or ankle arthrodesis,” Dr. Goldberg said in an interview.
In the new study, patients were randomized to receive either a total ankle replacement (TAR) or ankle fusion (AF).
“We showed that, in both treatment groups, the clinical scores improved hugely, by more than three times the minimal clinically important difference,” Dr. Goldberg said in an interview.
“Although the ankle replacement arm improved, on average, by more than an extra 4 points over ankle fusion, this was not considered clinically or statistically significant,” he said.
The study is the first randomized trial to show high-quality and robust results, he noted, and findings support data from previous studies.
“Although both TAR and ankle fusion have been shown to be effective, they are very different treatments, with one fusing the bones so that there is no ankle joint movement, and the other replacing the joint with the aim of retaining ankle joint movement. It is difficult for a patient to know which treatment is more suitable for them, with most seeking guidance from their surgeon,” he said.
Generating high-quality evidence
The study, a randomized, multicenter, open-label trial known as TARVA (Total Ankle Replacement Versus Ankle Arthrodesis), aimed to compare the clinical effectiveness of the two existing publicly funded U.K. treatment options, the authors wrote.
Patients were recruited at 17 U.K. centers between March 6, 2015, and Jan. 10, 2019. The study enrolled 303 adults aged 50-85 years with end-stage ankle osteoarthritis; the mean age of the participants was 68 years, and 71% were men. A total of 137 TAR patients and 144 ankle fusion patients completed their surgeries, with clinical scores available for analysis. Baseline characteristics were largely similar between the groups.
Blinding was not possible because of the nature of the procedures, but the surgeons who screened the patients were not aware of the randomization allocations, the researchers noted. A total of 33 surgeons participated in the trial, treating a median of seven patients each during the study period.
For TAR, U.K. surgeons use both two-component, fixed-bearing and three-component, mobile-bearing implants, the authors wrote. Ankle fusion was done using the surgeon’s usual technique of either arthroscopic-assisted or open ankle fusion.
The primary outcome was the change in the Manchester–Oxford Foot Questionnaire walking/standing (MOXFQ-W/S) domain scores from baseline to 52 weeks after surgery. The MOXFQ-W/S uses a scale of 0-100, with lower scores representing better outcomes. Secondary outcomes included change in the MOXFQ-W/S scores at 26 weeks after surgery, as well as measures of patient quality of life.
No statistically significant difference
Overall, the mean MOXFQ-W/S scores improved significantly from baseline to 52 weeks in both groups, with average improvements of 49.9 points in the TAR group and 44.4 points in the AF group. The average scores at 52 weeks were 31.4 in the TAR group and 36.8 in the AF group.
The adjusted difference in score change from baseline was –5.56, showing a slightly greater degree of improvement with TAR, but this difference was not clinically or statistically significant, the researchers noted.
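For readers tracking the numbers, the reported score changes can be restated in a short sketch. This is purely illustrative arithmetic on the figures quoted above; the trial's –5.56 figure is a covariate-adjusted estimate, so it does not exactly equal the raw gap:

```python
# Illustrative arithmetic on the reported MOXFQ-W/S results (lower scores are
# better). Values are taken from the article, not recomputed from trial data.
tar_improvement = 49.9   # mean improvement at 52 weeks, TAR group (points)
af_improvement = 44.4    # mean improvement at 52 weeks, AF group (points)

# Raw gap in improvement; the published adjusted difference was -5.56,
# negative because a lower MOXFQ-W/S score favors TAR.
raw_gap = tar_improvement - af_improvement
print(f"Extra mean improvement with TAR: {raw_gap:.1f} points")
```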
Adverse event numbers were similar for both procedures, with 54% of TAR patients and 53% of AF patients experiencing at least 1 adverse event during the study period. Of those, 18% of TAR patients and 24% of AF patients experienced at least 1 serious adverse event.
However, the TAR patients experienced higher rates of wound healing complications and nerve injuries, while the rate of thromboembolism was higher among the AF patients, the researchers noted.
A prespecified subgroup analysis of patients with osteoarthritis in adjacent joints suggested a greater improvement in TAR, compared with AF, a difference that increased when fixed-bearing TAR was compared with AF, the authors wrote.
“This reinforces previous reports that suggest that the presence of adjacent joint arthritis may be an indication for ankle replacement over AF,” the authors wrote in their discussion.
“Many of these patients did not have any symptoms in the adjacent joints,” they noted.
“The presence of adjacent joint arthritis, meaning the wear and tear of the joints around the ankle joint, seemed to favor ankle replacement,” Dr. Goldberg said. Approximately 30 joints in the foot continue to move after the ankle is fused, and if these adjacent joints were not healthy before surgery [as was the case in 42% of the study patients], the results of fusion were less successful, he explained.
A post hoc analysis between TAR subtypes showed that patients who had fixed-bearing TAR had significantly greater improvements, compared with AF patients, but this difference was not observed in patients who had mobile-bearing TAR, the researchers noted.
Dr. Goldberg said it was surprising “that, in a separate analysis, we found that the fixed-bearing ankle replacement patients [who accounted for half of the implants used] improved by a much greater difference when compared to ankle fusion.”
The study findings were limited by several factors, including the short follow-up and a study design that allowed surgeons to choose any implant and technique, the researchers noted.
Other limitations include a lack of data on cost-effectiveness and the impact of comorbidities on outcomes, they wrote. However, the study is the first completed multicenter randomized controlled trial to compare TAR and AF procedures for end-stage ankle osteoarthritis and shows that both yield similar clinical improvements, they concluded.
Data can inform treatment discussion
The take-home messages for clinicians are that both ankle replacement and ankle fusion are effective treatments that improve patients’ quality of life, and it is important to establish the health of adjacent joints before making treatment recommendations, Dr. Goldberg said.
“Careful counseling on the relative risks of each procedure should be part of the informed consent process,” he added. Ideally, all patients seeking surgical care for ankle arthritis should have a choice between ankle replacement and ankle fusion, but sometimes there is inequity of provision of the two treatments, he noted.
“We now encourage all surgeons to work in ankle arthritis networks so that every patient, no matter where they live, can have choice about the best treatment for them,” he said.
Researchers met the challenge of surgical RCT
Randomized trials of surgical interventions are challenging to conduct, and are therefore limited in number, wrote Bruce Sangeorzan, MD, of the University of Washington, Seattle, and colleagues in an accompanying editorial in the Annals of Internal Medicine. However, the new study was strengthened by the inclusion of 17 centers, which ensured heterogeneity of implant type and surgeon experience level, the editorialists said.
The study is especially important, because ankle arthritis treatment is very understudied, compared with hip and knee arthritis, but it has a similar impact on activity, editorial coauthor Dr. Sangeorzan said in an interview.
“Randomized controlled trials are the gold standard for comparing medical therapies,” he said, “but they are very difficult to do in surgical treatments, particularly when the two treatments can be differentiated, in this case by movement of the ankle.”
In addition, there is a strong placebo effect attached to interventions, Dr. Sangeorzan noted. “Determining best-case treatment relies on prospective research, preferably randomized. Since both ankle fusion and ankle replacement are effective therapies, a prospective randomized trial is the best way to help make treatment decisions,” he said.
The current study findings are not surprising, but they are preliminary, and 1 year of follow-up is not enough to determine effectiveness, Dr. Sangeorzan emphasized. However, “the authors have done the hard work of randomizing the patients and collecting the data, and the patients can now be followed for a longer time,” he said.
“In addition, the trial was designed with multiple secondary outcome measures, so the data can be matched up with larger trials that were not randomized to identify key elements of success for each procedure,” he noted.
The key message for clinicians is that ankle arthritis has a significant impact on patients’ lives, but there are two effective treatments that can reduce the impact of the disease, said Dr. Sangeorzan. “The data suggest that there are differences in implant design and differences in comorbidities that should influence decision-making,” he added.
Additional research is needed in the form of longer study duration with larger cohorts, said Dr. Sangeorzan. In particular, researchers need to determine which comorbidities might drive patients to one type of care vs. another, he said. “The suggestion that [patients receiving implants with two motion segments have better outcomes than those receiving implants with one motion segment] also deserves further study,” he added.
The research was supported by the UK National Institute for Health and Care Research Health Technology Assessment Programme. The trial was sponsored by University College London. Dr. Goldberg disclosed grant support from NIHR HTA, as well as financial relationships with companies including Stryker, Paragon 28, and stock options with Standing CT Company, Elstree Waterfront Outpatients, and X Bolt Orthopedics.
The editorialists had no financial conflicts to disclose.
Ankle osteoarthritis remains a cause of severe pain and disability. Patients are treated nonoperatively if possible, but surgery is often needed for individuals with end-stage disease, wrote Andrew Goldberg, MBBS, of University College London and colleagues in the Annals of Internal Medicine.
“Most patients with ankle arthritis respond to nonoperative treatments, such as weight loss, activity modification, support braces, and analgesia, [but] once the disease has progressed to end-stage osteoarthritis, the main surgical treatments are total ankle re-placement or ankle arthrodesis,” Dr. Goldberg said, in an interview.
In the new study, patients were randomized to receive either a total ankle replacement (TAR) or ankle fusion (AF).
“We showed that, in both treatment groups the clinical scores improved hugely, by more than three times the minimal clinically important difference,” Dr. Goldberg said in an interview.
“Although the ankle replacement arm improved, on average, by more than an extra 4 points over ankle fusion, this was not considered clinically or statistically significant,” he said.
The study is the first randomized trial to show high-quality and robust results, he noted, and findings support data from previous studies.
“Although both TAR and ankle fusion have been shown to be effective, they are very different treatments, with one fusing the bones so that there is no ankle joint movement, and the other replacing the joint with the aim of retaining ankle joint movement. It is difficult for a patient to know which treatment is more suitable for them, with most seeking guidance from their surgeon,” he said.
Generating high-quality evidence
The study, a randomized, multicenter, open-label trial known as TARVA (Total Ankle Replacement Versus Ankle Arthrodesis), aimed to compare the clinical effectiveness of the two existing publicly funded U.K. treatment options, the authors wrote.
Patients were recruited at 17 U.K. centers between March 6, 2015, and Jan. 10, 2019. The study enrolled 303 adults aged 50-85 years with end-stage ankle osteoarthritis. The mean age of the participants was 68 years; 71% were men. A total of 137 TAR patients and 144 ankle fusion patients completed their surgeries with clinical scores available for analysis. Baseline characteristics were mainly similar between the groups.
Blinding was not possible because of the nature of the procedures, but the surgeons who screened the patients were not aware of the randomization allocations, the researchers noted. A total of 33 surgeons participated in the trial, with a median number of seven patients per surgeon during the study period.
For TAR, U.K. surgeons use both two-component, fixed-bearing and three-component, mobile-bearing implants, the authors write. Ankle fusion was done using the surgeon’s usual technique of either arthroscopic-assisted or open ankle fusion.
The primary outcome was the change in the Manchester–Oxford Foot Questionnaire walking/standing (MOXFQ-W/S) domain scores from baseline to 52 weeks after surgery. The MOXFQ-W/S uses a scale of 0-100, with lower scores representing better outcomes. Secondary outcomes included change in the MOXFQ-W/S scores at 26 weeks after surgery, as well as measures of patient quality of life.
No statistically significant difference
Overall, the mean MOXFQ-W/S scores improved significantly from baseline to 52 weeks for both groups, with average improvements of 49.9 in the TAR group and 44.4 points in the AF group. The average scores at 52 weeks were 31.4 in the TAR group and 36.8 in the AF group.
The adjusted difference in score change from baseline was –5.56, showing a slightly greater degree of improvement with TAR, but this difference was not clinically or statistically significant, the researchers noted.
Adverse event numbers were similar for both procedures, with 54% of TAR patients and 53% of AF patients experiencing at least 1 adverse event during the study period. Of those, 18% of TAR patients and 24% of AF patients experienced at least 1 serious adverse event.
However, the TAR patients experienced a higher rate of wound healing complications and nerve injuries, while thromboembolism was higher in the AF patients, the researchers noted.
A prespecified subgroup analysis of patients with osteoarthritis in adjacent joints suggested a greater improvement in TAR, compared with AF, a difference that increased when fixed-bearing TAR was compared with AF, the authors wrote.
“This reinforces previous reports that suggest that the presence of adjacent joint arthritis may be an indication for ankle replacement over AF,” the authors wrote in their discussion.
“Many of these patients did not have any symptoms in the adjacent joints,” they noted.
“The presence of adjacent joint arthritis, meaning the wear and tear of the joints around the ankle joint, seemed to favor ankle replacement,” Dr. Goldberg said. Approximately 30 joints in the foot continue to move after the ankle is fused, and if these adjacent joints are not healthy before surgery [as was the case in 42% of the study patients], the results of fusion were less successful, he explained.
A post hoc analysis between TAR subtypes showed that patients who had fixed-bearing TAR had significantly greater improvements, compared with AF patients, but this difference was not observed in patients who had mobile-bearing TAR, the researchers noted.
Dr. Goldberg said it was surprising “that, in a separate analysis, we found that the fixed-bearing ankle replacement patients [who accounted for half of the implants used] improved by a much greater difference when compared to ankle fusion.”
The study findings were limited by several factors including the short follow-up and study design that allowed surgeons to choose any implant and technique, the researchers noted.
Other limitations include a lack of data on cost-effectiveness and the impact of comorbidities on outcomes, they wrote. However, the study is the first completed multicenter randomized controlled trial to compare TAR and AF procedures for end-stage ankle osteoarthritis and shows that both yield similar clinical improvements, they concluded.
Data can inform treatment discussion
The take-home messages for clinicians are that both ankle replacement and ankle fusion are effective treatments that improve patients’ quality of life, and it is important to establish the health of adjacent joints before making treatment recommendations, Dr. Goldberg said.
“Careful counseling on the relative risks of each procedure should be part of the informed consent process,” he added. Ideally, all patients seeking surgical care for ankle arthritis should have a choice between ankle replacement and ankle fusion, but sometimes there is inequity of provision of the two treatments, he noted.
Ankle osteoarthritis remains a cause of severe pain and disability. Patients are treated nonoperatively if possible, but surgery is often needed for individuals with end-stage disease, wrote Andrew Goldberg, MBBS, of University College London and colleagues in the Annals of Internal Medicine.
“Most patients with ankle arthritis respond to nonoperative treatments, such as weight loss, activity modification, support braces, and analgesia, [but] once the disease has progressed to end-stage osteoarthritis, the main surgical treatments are total ankle replacement or ankle arthrodesis,” Dr. Goldberg said in an interview.
In the new study, patients were randomized to receive either a total ankle replacement (TAR) or ankle fusion (AF).
“We showed that, in both treatment groups the clinical scores improved hugely, by more than three times the minimal clinically important difference,” Dr. Goldberg said in an interview.
“Although the ankle replacement arm improved, on average, by more than an extra 4 points over ankle fusion, this was not considered clinically or statistically significant,” he said.
The study is the first randomized trial to show high-quality and robust results, he noted, and findings support data from previous studies.
“Although both TAR and ankle fusion have been shown to be effective, they are very different treatments, with one fusing the bones so that there is no ankle joint movement, and the other replacing the joint with the aim of retaining ankle joint movement. It is difficult for a patient to know which treatment is more suitable for them, with most seeking guidance from their surgeon,” he said.
Generating high-quality evidence
The study, a randomized, multicenter, open-label trial known as TARVA (Total Ankle Replacement Versus Ankle Arthrodesis), aimed to compare the clinical effectiveness of the two existing publicly funded U.K. treatment options, the authors wrote.
Patients were recruited at 17 U.K. centers between March 6, 2015, and Jan. 10, 2019. The study enrolled 303 adults aged 50-85 years with end-stage ankle osteoarthritis. The mean age of the participants was 68 years; 71% were men. A total of 137 TAR patients and 144 ankle fusion patients completed their surgeries with clinical scores available for analysis. Baseline characteristics were mainly similar between the groups.
Blinding was not possible because of the nature of the procedures, but the surgeons who screened the patients were not aware of the randomization allocations, the researchers noted. A total of 33 surgeons participated in the trial, with a median of seven patients per surgeon during the study period.
For TAR, U.K. surgeons use both two-component, fixed-bearing and three-component, mobile-bearing implants, the authors wrote. Ankle fusion was done using the surgeon’s usual technique of either arthroscopic-assisted or open ankle fusion.
The primary outcome was the change in the Manchester–Oxford Foot Questionnaire walking/standing (MOXFQ-W/S) domain scores from baseline to 52 weeks after surgery. The MOXFQ-W/S uses a scale of 0-100, with lower scores representing better outcomes. Secondary outcomes included change in the MOXFQ-W/S scores at 26 weeks after surgery, as well as measures of patient quality of life.
No statistically significant difference
Overall, the mean MOXFQ-W/S scores improved significantly from baseline to 52 weeks for both groups, with average improvements of 49.9 points in the TAR group and 44.4 points in the AF group. The average scores at 52 weeks were 31.4 in the TAR group and 36.8 in the AF group.
The adjusted difference in score change from baseline was –5.56, showing a slightly greater degree of improvement with TAR, but this difference was not clinically or statistically significant, the researchers noted.
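As a rough arithmetic check using only the group averages reported above (not patient-level data), the 52-week scores and improvements imply that both arms started from very similar baseline severity:

```python
# Illustrative arithmetic from the reported group averages (not patient-level
# data). MOXFQ-W/S is scored 0-100; lower scores represent better outcomes.
improvement = {"TAR": 49.9, "AF": 44.4}  # mean improvement, baseline -> 52 weeks
week52 = {"TAR": 31.4, "AF": 36.8}       # mean score at 52 weeks

# Scores fell by `improvement`, so the implied mean baseline is
# week-52 score + improvement.
baseline = {arm: round(week52[arm] + improvement[arm], 1) for arm in week52}
print(baseline)  # {'TAR': 81.3, 'AF': 81.2} -> similar starting severity

# Unadjusted between-arm difference in improvement; the published figure
# of -5.56 is the covariate-adjusted version of this contrast.
print(round(improvement["AF"] - improvement["TAR"], 1))  # -5.5
```

This back-of-the-envelope check also shows why the adjusted difference (–5.56) is close to the simple difference in mean improvements.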
Adverse event numbers were similar for both procedures, with 54% of TAR patients and 53% of AF patients experiencing at least 1 adverse event during the study period. Of those, 18% of TAR patients and 24% of AF patients experienced at least 1 serious adverse event.
However, the TAR patients experienced a higher rate of wound healing complications and nerve injuries, while thromboembolism was higher in the AF patients, the researchers noted.
A prespecified subgroup analysis of patients with osteoarthritis in adjacent joints suggested a greater improvement in TAR, compared with AF, a difference that increased when fixed-bearing TAR was compared with AF, the authors wrote.
“This reinforces previous reports that suggest that the presence of adjacent joint arthritis may be an indication for ankle replacement over AF,” the authors wrote in their discussion.
“Many of these patients did not have any symptoms in the adjacent joints,” they noted.
“The presence of adjacent joint arthritis, meaning the wear and tear of the joints around the ankle joint, seemed to favor ankle replacement,” Dr. Goldberg said. Approximately 30 joints in the foot continue to move after the ankle is fused, and when these adjacent joints were not healthy before surgery [as was the case in 42% of the study patients], the results of fusion were less successful, he explained.
A post hoc analysis between TAR subtypes showed that patients who had fixed-bearing TAR had significantly greater improvements, compared with AF patients, but this difference was not observed in patients who had mobile-bearing TAR, the researchers noted.
Dr. Goldberg said it was surprising “that, in a separate analysis, we found that the fixed-bearing ankle replacement patients [who accounted for half of the implants used] improved by a much greater difference when compared to ankle fusion.”
The study findings were limited by several factors including the short follow-up and study design that allowed surgeons to choose any implant and technique, the researchers noted.
Other limitations include a lack of data on cost-effectiveness and the impact of comorbidities on outcomes, they wrote. However, the study is the first completed multicenter randomized controlled trial to compare TAR and AF procedures for end-stage ankle osteoarthritis and shows that both yield similar clinical improvements, they concluded.
Data can inform treatment discussion
The take-home messages for clinicians are that both ankle replacement and ankle fusion are effective treatments that improve patients’ quality of life, and it is important to establish the health of adjacent joints before making treatment recommendations, Dr. Goldberg said.
“Careful counseling on the relative risks of each procedure should be part of the informed consent process,” he added. Ideally, all patients seeking surgical care for ankle arthritis should have a choice between ankle replacement and ankle fusion, but sometimes there is inequity of provision of the two treatments, he noted.
“We now encourage all surgeons to work in ankle arthritis networks so that every patient, no matter where they live, can have choice about the best treatment for them,” he said.
Researchers met the challenge of surgical RCT
Randomized trials of surgical interventions are challenging to conduct and therefore limited, wrote Bruce Sangeorzan, MD, of the University of Washington, Seattle, and colleagues in an accompanying editorial in the Annals of Internal Medicine. However, the new study was strengthened by the inclusion of 17 centers, which provided heterogeneity of implant type and surgeon experience level, the editorialists said.
The study is especially important, because ankle arthritis treatment is very understudied, compared with hip and knee arthritis, but it has a similar impact on activity, editorial coauthor Dr. Sangeorzan said in an interview.
“Randomized controlled trials are the gold standard for comparing medical therapies,” he said, “but they are very difficult to do in surgical treatments, particularly when the two treatments can be differentiated, in this case by movement of the ankle.”
In addition, there is a strong placebo effect attached to interventions, Dr. Sangeorzan noted. “Determining best-case treatment relies on prospective research, preferably randomized. Since both ankle fusion and ankle replacement are effective therapies, a prospective randomized trial is the best way to help make treatment decisions,” he said.
The current study findings are not surprising, but they are preliminary, and 1 year of follow-up is not enough to determine effectiveness, Dr. Sangeorzan emphasized. However, “the authors have done the hard work of randomizing the patients and collecting the data, and the patients can now be followed for a longer time,” he said.
“In addition, the trial was designed with multiple secondary outcome measures, so the data can be matched up with larger trials that were not randomized to identify key elements of success for each procedure,” he noted.
The key message for clinicians is that ankle arthritis has a significant impact on patients’ lives, but there are two effective treatments that can reduce the impact of the disease, said Dr. Sangeorzan. “The data suggest that there are differences in implant design and differences in comorbidities that should influence decision-making,” he added.
Additional research is needed in the form of longer study duration with larger cohorts, said Dr. Sangeorzan. In particular, researchers need to determine what comorbidities might drive patients to one type of care vs. another, he said. “The suggestion that [patients receiving implants with two motion segments have better outcomes than those receiving implants with one motion segment] also deserves further study,” he added.
The research was supported by the UK National Institute for Health and Care Research Health Technology Assessment Programme. The trial was sponsored by University College London. Dr. Goldberg disclosed grant support from NIHR HTA, as well as financial relationships with companies including Stryker, Paragon 28, and stock options with Standing CT Company, Elstree Waterfront Outpatients, and X Bolt Orthopedics.
The editorialists had no financial conflicts to disclose.
Nutrition for cognition: A missed opportunity in U.S. seniors?
Older adults who use the Supplemental Nutrition Assistance Program (SNAP) show slower memory decline than eligible nonusers, new research shows. Researchers assessed the memory function of more than 3,500 persons who did or did not use SNAP over a period of 20 years. They found that those who didn’t use the food benefits program experienced 2 more years of cognitive aging compared with program users.
Of the 3,555 individuals included in the study, all were eligible to use the benefits, but only 559 did, leaving 2,996 participants who did not take advantage of the program.
Low program participation levels translate into a missed opportunity to prevent dementia, said study investigator Adina Zeki Al Hazzouri, PhD, assistant professor of epidemiology at the Columbia Aging Center at Columbia University Mailman School of Public Health in New York.
She said that prior research has shown that stigma may prevent older Americans from using SNAP. “Educational programs are needed to reduce the stigma that the public holds towards SNAP use,” she said.
Policy change could increase usage among older individuals, Dr. Zeki Al Hazzouri noted. Such changes could include simplifying enrollment and reporting procedures, shortening recertification periods, and increasing benefit levels.
The study was published online in Neurology.
Memory preservation
Dr. Zeki Al Hazzouri and her team assessed respondents from the Health and Retirement Study (HRS), a representative sample of Americans aged 50 and older. All respondents who were eligible to participate in SNAP in 1996 were followed every 2 years until 2016.
At each assessment, HRS respondents completed memory tests, including immediate and delayed word recall. For those who were too impaired to complete the interview, proxy informants – typically, their spouses or family members – assessed the memory and cognition of their family members using validated instruments, such as the 16-item Informant Questionnaire for Cognitive Decline.
Investigators used a validated memory function composite score, which is benchmarked against the memory assessments and evaluations of the Aging, Demographics, and Memory Study (ADAMS) cohort.
The team found that compared with nonusers, SNAP users were more likely to be women, Black, and born in the southern United States. They were less likely to be married and had more chronic conditions, such as high blood pressure, diabetes, cancer, heart problems, psychiatric problems, and arthritis.
One important study limitation was that SNAP use was measured only once during the study, the investigators noted. Ideally, Dr. Zeki Al Hazzouri said, future research would examine cumulative SNAP use history and explore the pathways that might account for the association between SNAP use and memory decline.
Although there were no significant differences in baseline memory function between SNAP users and nonusers, over a 10-year period users experienced approximately 2 fewer years of cognitive aging than those who didn’t use the program.
Dr. Zeki Al Hazzouri speculated that SNAP benefits may slow cognitive aging by contributing to overall brain health and that, in comparison with nonusers, SNAP users absorb more nutrients, which promote neuronal integrity.
The investigators theorized that SNAP benefits may reduce stress from financial hardship, which has been linked to premature cognitive aging in other research.
“SNAP may also increase the purchasing power and investment in other health preserving behaviors, but also resulting in better access to care, which may in turn result in better disease management and management of risk factors for cognitive function,” the investigators wrote.
An underutilized program
In an accompanying editorial, Steven Albert, PhD, Philip B. Hallen Endowed Chair in Community Health and Social Justice at the University of Pittsburgh, noted that in 2020, among households with people aged 50 and older in the United States, more than 9 million Americans experienced food insecurity.
Furthermore, he pointed out, research from 2018 showed that 71% of people aged 60 and older who met income eligibility for SNAP did not participate in the program. “SNAP is an underutilized food security program involving substantial income supplements for older people with low incomes.
“Against the backdrop of so many failures of pharmacotherapy for dementia and the so far inexorable increase in the prevalence of dementia due to population aging, are we missing an opportunity to support cognitive health by failing to enroll the 14 million Americans who are over age 60 and eligible for SNAP but who do not participate?” Dr. Albert asked. He suggested that it would be helpful to determine this through a randomized promotion trial.
The study was funded by the National Institute on Aging. The authors reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
From Neurology
Does subclinical hyperthyroidism raise fracture risk?
People with subclinical hyperthyroidism are at 34% greater risk of experiencing a fracture compared with those with normal thyroid function, new research shows.
The finding, from a study of nearly 11,000 middle-aged men and women followed for a median of 2 decades, “highlights a potential role for more aggressive screening and monitoring of patients with subclinical hyperthyroidism to prevent bone mineral disease,” the researchers wrote.
Primary care physicians “should be more aware of the risks for fracture among persons with subclinical hyperthyroidism in the ambulatory setting,” Natalie R. Daya, a PhD student in epidemiology at Johns Hopkins Bloomberg School of Public Health, Baltimore, and first author of the study, told this news organization.
Ms. Daya and her colleagues published their findings in JAMA Network Open.
Building on earlier findings
The results agree with previous work, including a meta-analysis of 13 prospective cohort studies of 70,289 primarily White individuals with an average age of 64 years, which found that subclinical hyperthyroidism was associated with a modestly increased risk for fractures, the researchers noted.
“Our study extends these findings to a younger, community-based cohort that included both Black and White participants, included extensive adjustment for potential confounders, and had a longer follow-up period (median follow-up of 21 years vs. 12 years),” they wrote.
The study included 10,946 participants in the Atherosclerosis Risk in Communities Study who were recruited in Washington County, Maryland; Forsyth County, North Carolina; Jackson, Mississippi; and the suburbs of Minneapolis.
Baseline thyroid function was measured in blood samples collected during the second visit, which occurred between 1990 and 1992. No participants in the new analysis took thyroid medications or had a history of hospitalization for fractures at baseline, and all identified as Black or White. The mean age was 57 years, 24% were Black, and 54.3% were female.
Subclinical hyperthyroidism was defined as a thyrotropin level less than 0.56 mIU/L; subclinical hypothyroidism as a thyrotropin level greater than 5.1 mIU/L; and normal thyroid function as a thyrotropin level between 0.56 and 5.1 mIU/L, with normal free thyroxine levels of 0.85-1.4 ng/dL.
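The study's cutoffs can be summarized as a simple decision rule. The sketch below is illustrative only (it encodes the definitions above, and is in no way clinical guidance); the function name and the catch-all label are my own:

```python
def classify_thyroid(tsh_miu_l: float, free_t4_ng_dl: float) -> str:
    """Classify thyroid status per the study's definitions (illustrative only).

    TSH (thyrotropin) in mIU/L; free T4 (thyroxine) in ng/dL.
    'Subclinical' means an abnormal TSH with a normal free T4 (0.85-1.4 ng/dL).
    """
    normal_ft4 = 0.85 <= free_t4_ng_dl <= 1.4
    if not normal_ft4:
        return "outside study definitions (abnormal free T4)"
    if tsh_miu_l < 0.56:
        return "subclinical hyperthyroidism"
    if tsh_miu_l > 5.1:
        return "subclinical hypothyroidism"
    return "normal thyroid function"

print(classify_thyroid(0.3, 1.1))  # subclinical hyperthyroidism
print(classify_thyroid(6.0, 1.0))  # subclinical hypothyroidism
print(classify_thyroid(2.0, 1.2))  # normal thyroid function
```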
The vast majority (93%) of participants had normal thyroid function, 2.6% had subclinical hyperthyroidism, and 4.4% had subclinical hypothyroidism, according to the researchers.
Median follow-up was 21 years. The researchers identified 3,556 incident fractures, detected with hospitalization discharge codes through 2019 and inpatient and Medicare claims data through 2018, for a rate of 167.1 per 10,000 person-years.
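The reported incidence rate implies the cohort's total follow-up time; the person-years figure below is derived from the article's numbers, not reported directly:

```python
# Back-of-the-envelope check: rate = events / person-years * 10,000,
# so person-years = events / rate * 10,000.
fractures = 3556            # incident fractures identified
rate_per_10k_py = 167.1     # reported rate per 10,000 person-years

person_years = fractures / rate_per_10k_py * 10_000
print(round(person_years))  # ~212,807 person-years of follow-up (derived)
```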
Adjusted hazard ratios for fracture were 1.34 (95% confidence interval [CI], 1.09-1.65) for people with subclinical hyperthyroidism and 0.90 (95% CI, 0.77-1.05) for those with subclinical hypothyroidism, compared with those with normal thyroid function.
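A quick way to read these intervals: a 95% CI that excludes the null value of 1 indicates statistical significance at the 5% level for a hazard ratio. A minimal helper (the function is my own, not from the study):

```python
def ci_excludes_null(lo: float, hi: float, null: float = 1.0) -> bool:
    """True if the confidence interval excludes the null value (HR = 1),
    i.e., the association is statistically significant at the CI's level."""
    return hi < null or lo > null

# Reported adjusted hazard ratios vs. normal thyroid function:
print(ci_excludes_null(1.09, 1.65))  # True: subclinical hyperthyroidism (HR 1.34)
print(ci_excludes_null(0.77, 1.05))  # False: subclinical hypothyroidism (HR 0.90)
```

This is why only the subclinical hyperthyroidism association is described as significant.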
The most common fracture sites were the hip (14.1%) and the spine (13.8%), according to the researchers.
Limitations included a lack of thyroid function data during the follow-up period and lack of data on bone mineral density, the researchers wrote.
‘An important risk factor’
Endocrinologist Michael McClung, MD, founding and emeritus director of the Oregon Osteoporosis Center, Portland, who was not involved in the study, pointed out that both subclinical hypothyroidism and subclinical hyperthyroidism have been linked to greater risk for cardiovascular disease as well as fracture.
The new paper underscores that subclinical hyperthyroidism “should be included as an important risk factor” for fracture as well as cardiovascular risk, Dr. McClung said in an interview. In considering whether to treat osteoporosis, subclinical hyperthyroidism “may be enough to tip the balance in favor of pharmacological therapy,” he added.
Thyroid-stimulating hormone (TSH) tests to assess thyroid function are typically ordered only if a patient has symptoms of hyperthyroidism or hypothyroidism, Ms. Daya said. Depending on the cause and severity of a low TSH level, a physician may prescribe methimazole or radioactive iodine therapy to reduce the production of thyroxine, she said.
However, well-designed studies are needed to evaluate whether treatment of subclinical thyroid dysfunction reduces the risk for fracture or cardiovascular problems and assess downsides such as side effects, costs, and psychological harm, Dr. McClung noted.
The U.S. Preventive Services Task Force concluded in 2015 that the data were insufficient to recommend screening for thyroid dysfunction in adults without symptoms. As of a year ago, no new evidence has emerged to support an update, according to the task force’s website.
“Until those studies are available, selective screening of thyroid function should be considered in all patients undergoing risk assessment for cardiovascular disease or skeletal health,” Dr. McClung said.
The Atherosclerosis Risk in Communities Study has been funded by the National Heart, Lung, and Blood Institute of the National Institutes of Health (NIH) and the U.S. Department of Health and Human Services. Ms. Daya and four study authors reported receiving NIH grants during the study period. Dr. McClung reported no relevant financial conflicts of interest.
A version of this article first appeared on Medscape.com.
The most common fracture sites were the hip (14.1%) and the spine (13.8%), according to the researchers.
Limitations included a lack of thyroid function data during the follow-up period and lack of data on bone mineral density, the researchers wrote.
‘An important risk factor’
Endocrinologist Michael McClung, MD, founding and emeritus director of the Oregon Osteoporosis Center, Portland, who was not involved in the study, pointed out that both subclinical hypothyroidism and subclinical hyperthyroidism have been linked to greater risk for cardiovascular disease as well as fracture.
The new paper underscores that subclinical hyperthyroidism “should be included as an important risk factor” for fracture as well as cardiovascular risk, Dr. McClung said in an interview. In considering whether to treat osteoporosis, subclinical hyperthyroidism “may be enough to tip the balance in favor of pharmacological therapy,” he added.
Thyroid-stimulating hormone (TSH) tests to assess thyroid function are typically ordered only if a patient has symptoms of hyperthyroidism or hypothyroidism, Ms. Daya said. Depending on the cause and severity of a low TSH level, a physician may prescribe methimazole or radioactive iodine therapy to reduce the production of thyroxine, she said.
However, well-designed studies are needed to evaluate whether treatment of subclinical thyroid dysfunction reduces the risk for fracture or cardiovascular problems and assess downsides such as side effects, costs, and psychological harm, Dr. McClung noted.
The U.S. Preventive Services Task Force concluded in 2015 that the data were insufficient to recommend screening for thyroid dysfunction in adults without symptoms. As of a year ago, no new evidence has emerged to support an update, according to the task force’s website.
“Until those studies are available, selective screening of thyroid function should be considered in all patients undergoing risk assessment for cardiovascular disease or skeletal health,” Dr. McClung said.
The Atherosclerosis Risk in Communities Study has been funded by the National Heart, Lung, and Blood Institute of the National Institutes of Health (NIH) and the U.S. Department of Health and Human Services. Ms. Daya and four study authors reported receiving NIH grants during the study period. Dr. McClung reported no relevant financial conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
Daily aspirin fails to reduce risk of fractures in older adults
Previous research suggests that aspirin may reduce the risk of fragility fractures by delaying bone loss, but the direct effects of aspirin on bone microarchitecture and the association between aspirin use and fracture risk in humans have not been explored, corresponding author Anna L. Barker, PhD, and colleagues wrote in their paper published in JAMA Internal Medicine.
Dr. Barker, who is executive director of research and innovation for Silverchain (a senior care program), said, in an interview, that she and her coauthors hypothesized “that aspirin could reduce both falls and fractures by reducing cardiovascular-associated physical and cognitive impairments and the anti-inflammatory properties mediating bone remodeling.”
Study methods and results
In the ASPREE-FRACTURE substudy, the authors examined the impact of daily low-dose aspirin (100 mg) on incidence of any fracture in more than 16,000 community-dwelling adults aged 70 years and older. A secondary endpoint was the incidence of serious falls, defined as falls requiring a hospital visit. Individuals with chronic illness and cardiovascular or cerebrovascular disease were excluded, as were those with dementia or other cognitive impairment, or a physical disability.
The study population included 16,703 participants enrolled in the larger Aspirin in Reducing Events in the Elderly (ASPREE) clinical trial between 2010 and 2014. Of these, 8,322 were randomized to aspirin and 8,381 to a placebo. The median age was 74 years, and 55% of the participants were women.
Over a median follow-up of 4.6 years (76,219 total person-years), the risk of first fracture was similar between the aspirin and placebo groups (hazard ratio, 0.97), but the risk of serious falls was significantly higher in the aspirin group (884 falls vs. 804 falls, P = .01).
The incidence of first fracture was similar between the aspirin and placebo groups (813 vs. 718), as was the incidence of all fractures (1,394 and 1,471, respectively).
The results for both fractures and falls were essentially unchanged in a multivariate analysis controlling for variables known to affect fracture and fall risk and remained similar for different types of fractures (hip, trauma-related, nonpathological) as well, the researchers noted.
In their discussion, the researchers wrote that the clinical significance of the study is the inability of aspirin to reduce the risk of fractures in otherwise healthy older adults. They expressed surprise at the increase in serious falls, citing their hypothesis that the antiplatelet effects of aspirin may reduce cardiovascular and cerebrovascular events, thereby slowing physical decline and decreasing fall risk.
The increased risk of serious falls was not accompanied by an increase in fractures, and the increased fall risk was similar across subgroups of aspirin users, the researchers said.
Low-dose aspirin’s failure to reduce the risk of fractures, combined with its increase in the risk of serious falls, adds to evidence that this agent provides little favorable benefit in a healthy, White older adult population.
The study findings were limited by several factors including the relatively homogeneous older and healthy population, and possible insufficient study duration to allow for changes in fracture and fall risk, the researchers noted. Other potential limitations include that the dose of aspirin used in the study was too low to affect bone remodeling and the lack of data on bone density, rheumatoid arthritis, and osteoporosis, they said.
However, the results were strengthened by the large sample size and high participant retention rate, and represent the first known examination of data from a randomized, controlled trial of the effect of aspirin on fractures, they added.
Setting the stage for more research
Overall, “This study adds to the growing body of evidence from other studies that the use of aspirin in people who do not have a risk of cardiovascular disease or stroke provides little benefit,” said Dr. Barker, who is also a professor at Monash University, Melbourne, Australia. However, “Older adults with a medical reason to take aspirin should continue to do so,” she emphasized.
“The most important thing the study showed is the primary endpoint, which was that aspirin use does not have an effect on fracture risk,” said Neil Skolnik, MD, of Sidney Kimmel Medical College, Philadelphia, in an interview.
“The increase in serious falls, as defined by a fall resulting in a visit to a hospital, is likely due to an increased risk of bleeding after a fall on aspirin,” said Dr. Skolnik, who was not involved in the study. Dr. Skolnik added that the current study findings support the current recommendations of the United States Preventive Services Task Force, which he quoted as follows, “The USPSTF recommends against initiating low-dose aspirin use for the primary prevention of CVD in adults 60 years or older.”
The study was supported by the National Institute on Aging and the National Cancer Institute at the National Institutes of Health; the National Health and Medical Research Council (Australia); Monash University; and the Victorian Cancer Agency. Lead author Dr. Barker was supported in part by the NHMRC and also disclosed grants from the NHMRC outside the current study. The ASPREE substudy also was supported by the University of Pittsburgh Claude D. Pepper Older American Independence Center and the Wake Forest University Claude D. Pepper Older Americans Independence Center. Bayer AG provided the aspirin used in the study but had no other role. Dr. Skolnik had no financial conflicts to disclose, but he serves on the editorial advisory board of Family Practice News.
FROM JAMA INTERNAL MEDICINE
Novel drug eases Parkinson’s-related constipation in early trial
The findings are based on 135 patients who completed 7-25 days of treatment with a daily oral dose of the drug, ENT-01, or a placebo. Complete spontaneous bowel movements (CSBMs), the primary efficacy endpoint, increased from a mean of 0.7 per week to 3.2 in individuals who took ENT-01 versus 1.2 in the placebo group.
The phase 2, multicenter, randomized trial showed that the drug “is safe and that it rapidly normalized bowel function in a dose-dependent fashion, with an effect that seems to persist for several weeks beyond the treatment period,” the researchers wrote in their paper on the research, which was published in Annals of Internal Medicine.
The researchers hypothesized that displacing aggregated alpha-synuclein from nerve cells in the gastrointestinal tract may also “slow progression of neurologic symptoms” in patients with PD by arresting the abnormal development of alpha-synuclein aggregates in the brain.
Denise Barbut, MD, cofounder, president and chief medical officer of Enterin, the company developing ENT-01, said the next step is another phase 2 trial to determine whether the drug reverses dementia or psychosis in patients with PD, before conducting a phase 3 study.
“We want to treat all nonmotor symptoms of Parkinson’s disease, not just constipation,” she said.
Constipation is an early PD symptom
Constipation is a common and persistent symptom of PD that often emerges years earlier than other symptoms such as motor deficits. Recent research has linked it to aggregates of alpha-synuclein that bind to cells in the enteric nervous system and may spread to the brain via the vagus nerve.
According to the researchers, ENT-01, a synthetic derivative of the antimicrobial compound squalamine, improves neural signaling in the gut by displacing alpha-synuclein aggregates.
In their double-blinded study, patients were randomized 3:1 to receive ENT-01 or a placebo and stratified by constipation severity to one of two starting doses: 75 mg (or three placebo pills) or 150 mg (or six placebo pills). Doses increased until a patient reached a “prokinetic” dose, a maximum of 250 mg (or 10 placebo pills), or the individual’s tolerability limit.
Dosing was fixed for the remainder of the 25 days, after which all patients took a placebo for 2 weeks followed by a 4-week washout.
In addition to more CSBMs, the treatment group had greater improvements in secondary endpoints of weekly spontaneous bowel movements (P = .002), better stool consistency (P < .001), improved ease of passage (P = .006), and less laxative use (P = .041).
There were no significant differences between the groups in scores on the Patient Assessment of Constipation Symptoms or the Patient Assessment of Constipation Quality of Life.
No deaths occurred, and there were no serious adverse events attributed to ENT-01. However, adverse events occurred in 61 (65.6%) of the patients who took the drug versus 27 (47.4%) of those who took a placebo.
The most common problems were nausea, experienced by 32 (34%) in the ENT-01 group and 3 (5.3%) in the placebo group, and diarrhea, which occurred in 18 (19.4%) of those in the ENT-01 group and 3 (5.3%) of those who took the placebo.
Of the 93 patients randomized to the drug, 24 (25.8%) discontinued treatment before therapy ended, mostly because of nausea or diarrhea. That compared with 8 of 57 patients (14.1%) in the placebo group who stopped taking their pills before the end of the therapy period.
The researchers suggested that nausea and diarrhea might be alleviated by more gradual dosing escalation and the use of antinausea medication.
Dr. Barbut noted that a previous open-label trial of 50 patients with PD showed that ENT-01 acts locally in the gastrointestinal tract, which means it would not be absorbed into the bloodstream or interfere with other medications.
Targeting the underlying disease
Researchers noted that, in small subsets of patients with dementia or psychosis, greater improvements in those symptoms occurred among those who took ENT-01 versus those who took a placebo.
According to the study, among 11 patients with psychosis, average scores on the Scale for the Assessment of Positive Symptoms adapted for PD dropped from 6.5 to 1.8 on a 45-point scale at the end of treatment in the ENT-01 group (n = 5) and from 6.3 to 3.4 in the placebo group (n = 6).
In 28 patients with dementia, scores on the Mini-Mental State Examination improved by 2.4 points on a 30-point scale, from 24.1 to 26.5, during the treatment period for the ENT-01 group (n = 14) versus an improvement of 0.9 points, from 24.8 to 25.7, in the placebo group (n = 14).
The researchers said the findings must be evaluated in future trials dedicated to studying ENT-01’s effects on PD-related psychosis and dementia.
Satish Rao, MD, PhD, a professor of medicine at the Medical College of Georgia, Augusta, who was not involved in the study, cautioned that long-term efficacy and tolerability have yet to be shown but lauded the study’s rigor including a “very robust endpoint” in CSBMs.
He added that, if findings are reproduced in a large study, the drug could have “a major impact” not just in treating constipation, for which there are no PD-specific drugs, but also in addressing neurological dysfunctions that are cardinal features of PD. “That is what is exciting to me, because we’re now talking about reversing the disease itself,” he said.
However, Dr. Barbut said it’s been difficult to get across to the medical community and to investors that a drug that acts on nerve cells in the gut might reverse neurologic symptoms by improving direct gut-brain communication. “That’s a concept that is alien to most people’s thinking,” she said.
Enterin funded the study and was responsible for the design, data collection and analysis. Its employees also participated in the interpretation of data, writing of the report, and the decision to submit the manuscript for publication. Dr. Barbut reported stock options in Enterin and patent interests in ENT-01. Fifteen other study investigators reported financial ties to Enterin and/or ENT-01 including employment, stock options, research funding, consulting fees and patent application ownership. Dr. Rao reported receiving honoraria from multiple companies that market drugs for general constipation.
The findings are based on 135 patients who completed 7-25 days of treatment with a daily oral dose of the drug, ENT-01, or a placebo. Complete spontaneous bowel movements (CSBMs), the primary efficacy endpoint, increased from a mean of 0.7 per week to 3.2 in individuals who took ENT-01 versus 1.2 in the placebo group.
The phase 2, multicenter, randomized trial showed that the drug “is safe and that it rapidly normalized bowel function in a dose-dependent fashion, with an effect that seems to persist for several weeks beyond the treatment period,” the researchers wrote in their paper on the research, which was published in Annals of Internal Medicine.
The researchers hypothesized that displacing aggregated alpha-synuclein from nerve cells in the gastrointestinal tract may also “slow progression of neurologic symptoms” in patients with PD by arresting the abnormal development of alpha-synuclein aggregates in the brain.
Denise Barbut, MD, cofounder, president and chief medical officer of Enterin, the company developing ENT-01, said the next step is another phase 2 trial to determine whether the drug reverses dementia or psychosis in patients with PD, before conducting a phase 3 study.
“We want to treat all nonmotor symptoms of Parkinson’s disease, not just constipation,” she said.
Constipation is an early PD symptom
Constipation is a common and persistent symptom of PD that often emerges years earlier than other symptoms such as motor deficits. Recent research has linked it to aggregates of alpha-synuclein that bind to cells in the enteric nervous system and may spread to the brain via the vagus nerve.
According to the researchers, ENT-01, a synthetic derivative of the antimicrobial compound squalamine, improves neural signaling in the gut by displacing alpha-synuclein aggregates.
In their double-blinded study, patients were randomized 3:1 to receive ENT-01 or a placebo and stratified by constipation severity to one of two starting doses: 75 mg (or three placebo pills) or 150 mg (or six placebo pills). Doses were escalated until a patient reached a “prokinetic” dose, the maximum of 250 mg (or 10 placebo pills), or the individual’s tolerability limit.
Dosing was fixed for the remainder of the 25 days, after which all patients took a placebo for 2 weeks followed by a 4-week washout.
In addition to more CSBMs, the treatment group had greater improvements in secondary endpoints of weekly spontaneous bowel movements (P = .002), better stool consistency (P < .001), improved ease of passage (P = .006), and less laxative use (P = .041).
There were no significant differences between the groups in scores on the Patient Assessment of Constipation Symptoms or the Patient Assessment of Constipation Quality of Life.
No deaths occurred, and there were no serious adverse events attributed to ENT-01. However, adverse events occurred in 61 (65.6%) of patients who took the drug versus 27 (47.4%) of those who took a placebo.
The most common problems were nausea, experienced by 32 (34.4%) in the ENT-01 group and 3 (5.3%) in the placebo group, and diarrhea, which occurred in 18 (19.4%) of those in the ENT-01 group and 3 (5.3%) who took the placebo.
Of the 93 patients randomized to the drug, 24 (25.8%) discontinued treatment before therapy ended, mostly because of nausea or diarrhea. That compared with 8 of 57 (14.1%) patients in the placebo group who stopped taking their pills before the end of the therapy period.
The researchers suggested that nausea and diarrhea might be alleviated by more gradual dosing escalation and the use of antinausea medication.
Dr. Barbut noted that a previous open-label trial of 50 patients with PD showed that ENT-01 acts locally in the gastrointestinal tract, meaning it is not absorbed into the bloodstream and should not interfere with other medications.
Targeting the underlying disease
Researchers noted that, in small subsets of patients with dementia or psychosis, greater improvements in those symptoms occurred among those who took ENT-01 versus those who took a placebo.
According to the study, among 11 patients with psychosis, average scores on the Scale for the Assessment of Positive Symptoms adapted for PD dropped from 6.5 to 1.8 on a 45-point scale at the end of treatment in the ENT-01 group (n = 5) and from 6.3 to 3.4 in the placebo group (n = 6).
In 28 patients with dementia, scores on the Mini-Mental State Examination improved by 2.4 points on a 30-point scale, from 24.1 to 26.5, during the treatment period for the ENT-01 group (n = 14) versus an improvement of 0.9 points, from 24.8 to 25.7, in the placebo group (n = 14).
The researchers said the findings must be evaluated in future trials dedicated to studying ENT-01’s effects on PD-related psychosis and dementia.
Satish Rao, MD, PhD, a professor of medicine at the Medical College of Georgia, Augusta, who was not involved in the study, cautioned that long-term efficacy and tolerability have yet to be shown but lauded the study’s rigor, including a “very robust endpoint” in CSBMs.
He added that, if findings are reproduced in a large study, the drug could have “a major impact” not just in treating constipation, for which there are no PD-specific drugs, but also in addressing neurological dysfunctions that are cardinal features of PD. “That is what is exciting to me, because we’re now talking about reversing the disease itself,” he said.
However, Dr. Barbut said it’s been difficult to get across to the medical community and to investors that a drug that acts on nerve cells in the gut might reverse neurologic symptoms by improving direct gut-brain communication. “That’s a concept that is alien to most people’s thinking,” she said.
Enterin funded the study and was responsible for the design, data collection and analysis. Its employees also participated in the interpretation of data, writing of the report, and the decision to submit the manuscript for publication. Dr. Barbut reported stock options in Enterin and patent interests in ENT-01. Fifteen other study investigators reported financial ties to Enterin and/or ENT-01 including employment, stock options, research funding, consulting fees and patent application ownership. Dr. Rao reported receiving honoraria from multiple companies that market drugs for general constipation.
FROM ANNALS OF INTERNAL MEDICINE
Sexual activities in seniors: Experts advise on what to ask
Sexual activity in older adults is something of a taboo, rarely discussed and largely ignored by researchers.
But failing to address human sexuality in old age can lead doctors to ask seniors the wrong questions about sex – if they ask at all.
When researchers do look at the issue, they find surprises, as Janie Steckenrider, PhD, has learned. In a new study presented at the annual scientific meeting of the Gerontological Society of America, Dr. Steckenrider, a professor of political science at Loyola Marymount University, Los Angeles, found that previous attempts to quantify the sexual activities of seniors appear to be limited largely to partnered sex – despite the fact that many older people tend to practice “solo sex,” another term for masturbation.
“Maybe they don’t have a partner, or their partner has sexual dysfunction, or has died. There could be pain involved,” Dr. Steckenrider said. “In the hierarchy of sexual activity, penetrative sex is the cultural norm. As people get older, penetrative sex becomes less important. The hierarchy shifts to include more emotional intimacy like touching and fondling.”
Of the 17 survey questionnaires Dr. Steckenrider analyzed, 11 had questions that focused exclusively on sex with a partner. Nine defined sexual activity and just five included questions about masturbation.
Take, for example, a 2018 poll by researchers at the University of Michigan, Ann Arbor, who found that 40% of people ages 65-80 said they were sexually active. Meanwhile, nearly two thirds of older adults said they were interested in sex, and more than half said sex was important to their quality of life.
But Dr. Steckenrider said this poll, like others, left the term “sexually active” undefined – raising questions about the meaning of the findings.
Sheryl A. Kingsberg, PhD, chief of behavioral medicine in the department of obstetrics and gynecology at University Hospitals Cleveland Medical Center, said she was surprised so few of the studies analyzed by Dr. Steckenrider included masturbation in their definition of sex.
“Clinical trials of potential treatments for female sexual problems, like hypoactive sexual desire disorder or painful sex, include both definitions of sexual activity and questions about masturbation,” she said. “Definitions also should not assume partnered sex is male or female,” she added.
Dr. Steckenrider and Dr. Kingsberg encouraged health care providers to address the sexual health of their patients by asking directly about their sexual function and concerns.
“Health care professionals cannot address sexual concerns if they don’t acknowledge their patients as sexual beings and inquire about sexual problems,” Dr. Kingsberg said.
The key, according to Dr. Steckenrider, is for clinicians to ask the right questions. But which ones?
Detail is crucial.
“I think that’s far better than asking whether they are sexually active, yes or no,” she said. “Ask: ‘How often have you engaged in these types of sexual activities?’ if you are looking for frequency, and be specific about the types of sex: kissing, fondling, or masturbation.”
A version of this article first appeared on Medscape.com.