Could Bedside Training Help End the US Neurologist Shortage?
DENVER — A structured bedside training program for internal medicine residents could help address the US neurologist shortage by building their confidence in evaluating common neurologic complaints, a new report suggested.
Bedside Rounding Alliance for Internal Medicine and Neurology Residents (BRAINs) moves training from the lecture hall to the bedside, offering instruction on obtaining a focused neurologic history and performing a focused neurologic physical exam for common neurologic symptoms.
Almost 100% of trainees surveyed gave the program a favorable rating, citing patient exposure and bedside training from neurology educators as keys to its success.
As internal medicine providers are often “the first to lay eyes” on patients with a neurology complaint, it’s important they “have a basic level of comfort” in addressing patients’ common questions and concerns, study author Prashanth Rajarajan, MD, PhD, a resident in the Department of Neurology at Brigham and Women’s Hospital, Boston, told this news organization.
The findings were presented at the 2024 annual meeting of the American Academy of Neurology.
Addressing ‘Neurophobia’
Neurology is often viewed by medical trainees as the most difficult subspecialty, Dr. Rajarajan said. Many have what he calls “neurophobia,” which he defines as “a discomfort with assessing and treating neurologic complaints.”
A survey at his institution showed 62% of internal medicine residents lacked the confidence to diagnose and treat neurologic diseases, he reported.
BRAINs is a structured neurology trainee-led, inpatient bedside teaching session for internal medicine residents, medical students, and others that aims to increase trainees’ confidence in assessing patients with common neurologic symptoms.
The program includes a biweekly 45-minute session. Most of the session is spent at the bedside and involves demonstrations and practice of a focused neurologic history and physical exam.
Participants receive feedback from educators, typically neurology residents or fellows in epilepsy, stroke, or another neurology subspecialty. Each session also includes a short discussion of pertinent diagnostics, management, and other topics.
Surveys evaluating the program and teaching skill development were completed by 59 residents and 15 neurology educators who participated in BRAINs between 2022 and 2024.
Over 90% of trainees (54) agreed BRAINs sessions met the program’s objective (5 were neutral); 49 agreed it increased confidence in taking a neuro history (9 were neutral and 1 disagreed); 56 felt it boosted their confidence in doing a neuro exam (3 were neutral); and 56 said BRAINs is more effective than traditional lecture-based didactics (3 were neutral).
All the residents rated the material covered as appropriate for their level of training; 88% considered the 45-minute session length appropriate; and 98% had a favorable impression of the program as a whole.
When asked to identify the most helpful aspect of the program, 82% cited more patient exposure and 81% more bedside teaching.
All educators reported that the sessions were an effective way to practice near-peer teaching skills. Most (87%) felt the experience was more effective at accomplishing learning objectives than preparing and giving traditional didactic lectures, and 80% agreed it also gave them an opportunity to get to know their medical colleagues.
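The raw resident counts above can be converted into shares of the 59 survey respondents. As a quick, hedged sketch (the item labels below are paraphrases for illustration, not the survey's exact wording):

```python
# Sketch: converting the reported resident counts into percentages of the
# 59 survey respondents. Labels are paraphrases of the survey items.
n_residents = 59
counts = {
    "sessions met program objective": 54,
    "increased history-taking confidence": 49,
    "increased exam confidence": 56,
    "more effective than lectures": 56,
}
percentages = {item: round(100 * n / n_residents, 1) for item, n in counts.items()}
# 54 of 59 is about 91.5%, consistent with the "over 90%" figure in the text.
```

For example, 54 of 59 works out to roughly 91.5%, matching the "over 90%" figure reported for the program-objective item.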
Use It or Lose It
Dr. Rajarajan noted that the program doesn’t require significant planning or extra staff, is not resource-intensive, and can be adapted to different services such as emergency departments and other learner populations.
But time will tell if the newfound confidence of those taking the program actually lasts.
“You have to keep using it,” he said. “You use it or lose it when it comes to these skills.”
Commenting on the initiative, Denney Zimmerman, DO, Neurocritical Care Faculty, Blount Memorial Hospital, Maryville, Tennessee, and cochair of the AAN session featuring the study, called the program a good example of one way to counteract “neurophobia” and address the widespread neurologist shortage in the United States.
A 2019 AAN report showed that by 2025, almost every state in the United States will have a mismatch between the number of practicing neurologists and the demand from patients with neurologic conditions. The report offered several ways to address the shortage, including more neurology-focused training for internal medicine doctors during their residency.
“They’re usually on the front line, both in the hospital and in the clinics, and can help expedite patients who need to be seen by neurology sooner rather than later,” Dr. Zimmerman said.
Dr. Zimmerman noted that the study assessed how well participants perceived the program but not whether it improved their skills.
He pointed out that different groups may assess different diseases during their training session. “I think it’s important to ensure you’re hitting all the major topics.”
The study received funding from an MGB Centers of Expertise Education Grant. Drs. Rajarajan and Zimmerman reported no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
FROM AAN 2024
Teleneurology for Suspected Stroke Speeds Treatment
Teleneurology prenotification for patients with suspected stroke significantly shortens door-to-needle times and increases rates of thrombolytic treatment, new research showed.
“This preliminary evidence supports adopting teleneurology prenotification as a best practice within health systems that have telestroke capabilities,” said study investigator Mark McDonald, MD, a neurologist at TeleSpecialists, Fort Myers, Florida.
The findings were presented at the 2024 annual meeting of the American Academy of Neurology.
Best Practices
The impact of emergency medical services prenotification, which refers to paramedics alerting receiving hospital emergency departments (EDs) of a suspected stroke on the way for appropriate preparations to be made, is well-defined, said Dr. McDonald.
“What we’re proposing as a best practice is not only should the ED or ED provider be aware, but there needs to be a system in place for standardizing communication to the neurology team so they’re aware, too.”
Prenotification allows a neurologist to “get on the screen to begin coordinating with the ED team to adequately prepare for the possibility of thrombolytic treatment,” he added.
Currently, teleneurology prenotification, he said, is variable and its benefits unclear.
Dr. McDonald said his organization, TeleSpecialists, maintains a large, detailed medical records database of emergency-related, teleneurology, and other cases. For stroke, it recommends 15 best practices for facilities, including teleneurology prenotification.
Other best practices include evaluating and administering thrombolysis in the CT imaging suite, a preassembled stroke kit that includes antihypertensives and thrombolytic agents, ensuring a weigh bed is available to determine the exact dose of thrombolysis treatment, and implementing “mock” stroke alerts, said Dr. McDonald.
From the database, researchers extracted acute telestroke consultations seen in the ED at 103 facilities in 15 states. To isolate the effect of teleneurologist prenotification, facilities that did not adhere to the other 14 best practices were excluded from the analysis.
Of 9290 patients included in the study, 731 were treated with thrombolysis at prenotification facilities (median age, 69 years; median National Institutes of Health Stroke Scale [NIHSS] score, 8) and 31 were treated at facilities without prenotification (median age, 63 years; median NIHSS score, 4). The thrombolytic treatment rate was 8.5% at prenotification facilities versus 4.8% at facilities without prenotification, a statistically significant difference.
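The per-arm denominators are not reported in the article, but they can be back-calculated from the treated counts and rates above. As a hedged sketch, the reconstruction below and the pooled two-proportion z-test are illustrative assumptions, not the study's stated analysis:

```python
# Sketch: back-of-envelope check of the reported treatment rates.
# The per-arm denominators are not given in the article; they are
# reconstructed here from the reported counts and percentages.
import math

treated_pre, rate_pre = 731, 0.085   # prenotification facilities
treated_no, rate_no = 31, 0.048      # facilities without prenotification

n_pre = round(treated_pre / rate_pre)  # reconstructed denominator, ~8600
n_no = round(treated_no / rate_no)     # reconstructed denominator, ~646

# Pooled two-proportion z-test, one plausible way the difference could
# be assessed for significance (not necessarily the study's method).
p_pool = (treated_pre + treated_no) / (n_pre + n_no)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_pre + 1 / n_no))
z = (rate_pre - rate_no) / se
```

The reconstructed denominators sum to roughly 9250, close to the 9290 patients reported, and the resulting z statistic of about 3.3 comfortably exceeds the 1.96 threshold, consistent with the significance the authors report.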
Prenotification facilities also had a significantly shorter median door-to-needle (DTN) time (35 vs 43 minutes), and significantly more of their patients had DTN times of less than 60 minutes (approximately 88% vs 68%).
Case-Level Analysis
However, just because a facility adheres to teleneurology prenotification as a whole, doesn’t mean it occurs in every case. Researchers explored the impact of teleneurology prenotification at the case level rather than the facility level.
“That gave us a bit more insight into the real impact because it’s not just being at a facility with the best practice; it’s actually working case by case to see whether it happened or not and that’s where we get the most compelling findings,” said Dr. McDonald.
Of 761 treatment cases, there was prenotification to the neurology team in 401 cases. In 360 cases, prenotification did not occur.
The median DTN time was 29 minutes in the group with actual prenotification vs 41.5 minutes in the group without actual prenotification, a difference that was statistically significant, Dr. McDonald said.
As for treatment within 30 minutes of arrival, 50.4% of patients in the teleneurology prenotification group met that threshold versus 18.9% in the no-prenotification group, a statistically significant difference.
DTN time of less than 30 minutes is increasingly used as a target. “Being treated within this time frame improves outcomes and reduces length of hospital stay,” said Dr. McDonald.
The prenotification group also had a statistically significant higher percentage of treatment within 60 minutes of hospital arrival (93.5% vs 80%).
These new findings should help convince health and telestroke systems that teleneurology prenotification is worth implementing. “We want to achieve consensus on this as a best practice,” said Dr. McDonald.
Prenotification, he added, “coordinates the process and eliminates unnecessary and time-consuming steps.”
Dr. McDonald plans to prospectively study prenotification by collecting data on a facility before and after implementing a prenotification process.
Compelling Evidence
Commenting on the research, David L. Tirschwell, MD, Harborview Medical Center, Department of Neurology, Seattle, who cochaired the AAN session featuring the research, said the study provides compelling evidence that teleneurologist prenotification improves DTN time.
“Prenotifications are often standard of care in many healthcare settings and should likely be considered a best practice. When possible, extending such prenotification to a teleconsultant would make sense, and these preliminary data support that approach.”
However, more details are needed “to consider whether the intervention is possibly generalizable to other telestroke practices across the United States,” said Dr. Tirschwell.
Dr. McDonald reported receiving personal compensation for serving as a consultant for Syntrillo Inc. and has stock in Syntrillo Inc. Dr. Tirschwell reported no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
FROM AAN 2024
Novel PCSK9 Inhibitor Reduced LDL by 50%
Lerodalcibep, a novel, third-generation proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitor, reduced low-density lipoprotein cholesterol (LDL-C) by more than 50% after 1 year in patients with or at a high risk for cardiovascular disease (CVD), new phase 3 results showed.
Newer, more stringent LDL targets were met in 90% of patients receiving lerodalcibep vs only 16% of those on placebo, despite concurrent treatment with a statin or statin plus ezetimibe.
“This hopefully gives doctors a more practical PCSK9 antagonist that’s small volume, can be administered monthly, and is an alternative to the every 2 week injection of monoclonal antibodies and probably more effective in LDL cholesterol–lowering compared to the small interfering RNA” medicines, study author Eric Klug, MBBCh, MMed, associate professor, Division of Cardiology, University of the Witwatersrand, Johannesburg, South Africa, told this news organization.
The findings from the LIBerate-HR trial were presented at the American College of Cardiology (ACC) Scientific Session 2024.
Additional Therapy Needed
The first goal is to achieve at least a 50% reduction in LDL-C, said Dr. Klug. The ACC, the American Heart Association, and the European Society of Cardiology recommend an LDL-C goal of no more than 55 mg/dL for patients with CVD or at a very high risk for myocardial infarction or stroke and no more than 70 mg/dL for high-risk patients.
Most patients don’t get to that combined goal with statins and ezetimibe and need additional therapy, “and it appears the earlier you give the therapy the better,” said Dr. Klug.
Lerodalcibep is given as a low-dose (1.2-mL) monthly injection and is more convenient than other LDL-C–lowering options, said Dr. Klug. “This is a small-volume molecule that can be delivered subcutaneously once a month and can be kept on the shelf so it doesn’t need to be kept in the fridge, and you can travel with it.”
LIBerate-HR included 922 patients with CVD or at a high or very high risk for myocardial infarction or stroke at 66 centers in 11 countries. Over half (52%) fell into the at-risk category.
The mean age of participants was 64.5 years, 77% were White, and, notably, about 45% were women. Some 84% were taking a statin, 16.6% ezetimibe, a quarter had diabetes, and 10% had the more severe inherited familial hypercholesterolemia (FH).
Patients were randomly assigned to receive monthly 300-mg (1.2-mL) subcutaneous injections of lerodalcibep (n = 615) or placebo (n = 307) for 52 weeks.
The mean LDL-C at baseline was 116.9 mg/dL in the placebo group and 116.3 mg/dL in the treatment group.
The co-primary efficacy endpoints were the percent change from baseline in LDL-C at week 52 and the mean of weeks 50 and 52 (average of the peak and trough dose).
Compared with placebo, lerodalcibep reduced LDL-C by 56.19% at week 52 (P < .0001) and by 62.69% at mean week 50/52 (P < .0001). The absolute decreases were 60.6 mg/dL at week 52 and 74.5 mg/dL for mean week 50/52.
Rule of Thumb
“There’s a sort of rule of thumb that for every 40 mg/dL that LDL-C is reduced, you reduce major adverse cardiovascular events (MACE) by 20%-23%,” said Dr. Klug. “So, by reducing LDL-C by 60 mg/dL at week 52, you’re reducing your risk of MACE maybe by 30% or 35%.”
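Dr. Klug's arithmetic can be sketched as a simple linear scaling. This is a back-of-envelope illustration of the informal heuristic he quotes (roughly 20%-23% relative MACE reduction per 40 mg/dL of LDL-C lowering), not a validated risk model:

```python
def estimated_mace_reduction(ldl_drop_mg_dl, per_40_benefit=0.20):
    """Linearly scale the per-40-mg/dL relative MACE reduction heuristic."""
    return ldl_drop_mg_dl / 40 * per_40_benefit

# Using the 60.6 mg/dL absolute LDL-C reduction reported at week 52:
low = estimated_mace_reduction(60.6, 0.20)
high = estimated_mace_reduction(60.6, 0.23)
print(f"{low:.0%}-{high:.0%}")  # roughly 30%-35%, matching Dr. Klug's estimate
```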
All subgroups reaped the same benefit from the intervention, noted Dr. Klug. “Whether you were male or female, under age 65, over age 65, baseline BMI less than median or more than median, White, Black or other, baseline statin intensity, diabetic or not diabetic, diagnosis of FH or not, it made no difference.”
As for secondary outcomes, most patients attained the newer, more stringent guideline-recommended LDL targets.
The treatment also reduced non–high-density lipoprotein cholesterol by 47%, apolipoprotein B by 43%, and Lp(a) by 33%.
Lerodalcibep was well tolerated: the proportion of patients with at least one adverse event was similar to placebo (71.6% vs 68.1%), as was the proportion with at least one serious adverse event (12.4% vs 13.4%).
Injection site reactions were mild to moderate. There was no difference in discontinuation rates due to these reactions (4.2% for the treatment and 4.6% for placebo).
A larger and longer trial to begin later this year should determine if the amount of LDL-C–lowering seen with lerodalcibep translates to greater reductions in cardiovascular events.
The company plans to file an application for approval to the US Food and Drug Administration in the next 2-4 months, said Dr. Klug.
Still Work to Do
During a press briefing, Dave L. Dixon, PharmD, professor and chair, Virginia Commonwealth University School of Pharmacy, Richmond, and member of the ACC Prevention of Cardiovascular Disease Council, congratulated the investigators “on moving this product forward and demonstrating the LDL-lowering efficacy, as well as providing some additional safety and tolerability data.”
He added it’s “clear” from the baseline LDL characteristics that “we have a lot of work to do in terms of helping patients achieve their lipid goals.”
Dr. Dixon noted up to about 30% of patients have some form of statin intolerance. “So, we really have to utilize our non-statin therapies, and unfortunately, we’re not doing a great job of that.”
That the trial enrolled so many women is “fantastic,” said Dr. Dixon, adding the investigators also “did a great job” of enrolling underrepresented minorities.
Having a once-a-month self-injection option “is great” and “fills a nice niche” for patients, said Dr. Dixon.
The study was funded by LIB Therapeutics, which manufactures lerodalcibep. Dr. Klug had no conflicts relevant to this study (he received honoraria from Novartis, Amgen, and Sanofi-Aventis).
A version of this article appeared on Medscape.com.
FROM ACC 2024
Adding ACEI to Chemotherapy Does Not Prevent Cardiotoxicity
Adding the angiotensin-converting enzyme (ACE) inhibitor enalapril to high-dose anthracycline-based chemotherapy did not prevent cardiac injury or cardiac dysfunction, a new randomized trial showed.
The results suggested adding an ACE inhibitor doesn’t affect cardiac injury or cardiac function outcomes “and should not be used as a preventative strategy” in these patients, David Austin, MD, consultant cardiologist, Academic Cardiovascular Unit, The James Cook University Hospital, Middlesbrough, England, and chief investigator for the PROACT study, told this news organization.
But while these negative results are disappointing, he said, “we now have a definitive result in a robustly conducted trial that will take the field forward.”
The findings were presented on April 8, 2024, at the American College of Cardiology (ACC) Scientific Session 2024.
Anthracyclines, which are derived from Streptomyces bacteria, are chemotherapy drugs widely used to treat several types of cancer. Doxorubicin is among the most clinically important anthracyclines.
While extremely effective, anthracyclines can cause irreversible damage to cardiac cells and ultimately impair cardiac function and even cause heart failure, which may only be evident years after exposure. “Cardiac injury is very common in patients treated with high dose anthracyclines,” noted Dr. Austin.
The open-label PROACT study included 111 adult patients (mean age, 58 years; predominantly White and women) being treated for breast cancer (62%) or non-Hodgkin lymphoma (NHL) (38%) at National Health Service hospitals in England with high-dose anthracycline-based chemotherapy.
Patients were randomized to standard care (six cycles of high-dose doxorubicin-equivalent anthracycline-based chemotherapy) plus the ACE inhibitor enalapril maleate or standard care alone. The mean chemotherapy dose was 328 mg/m²; any dose greater than 300 mg/m² is considered high.
The starting dose of enalapril was 2.5 mg twice a day, which was titrated up to a maximum of 10 mg twice a day. The ACE inhibitor was started at least 2 days before chemotherapy began and finished 3 weeks after the last anthracycline dose.
During the study, enalapril was titrated to the maximum 20-mg total daily dose in more than 75% of patients; the mean daily dose was 17.7 mg.
Myocardial Injury Outcome
The primary outcome was myocardial injury, defined as a high-sensitivity cardiac troponin T (cTnT) level of at least 14 ng/L during anthracycline treatment and 1 month after the last dose of anthracycline.
cTnT is highly expressed in cardiomyocytes and has become a preferred biomarker for detecting acute myocardial infarction and other causes of myocardial injury.
Blood sampling for cTnT and cardiac troponin I (cTnI) was performed at baseline, within 72 hours prior to chemotherapy and at trial completion. All patients had negative troponin results at baseline, indicating no heart damage.
A majority of patients experienced elevations in troponin (78% in the enalapril group and 83% in the standard of care group), but there was no statistically significant difference between groups (adjusted odds ratio [OR], 0.65; 95% CI, 0.23-1.78; P = .405).
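For readers wanting to check the direction of this result, the unadjusted odds ratio can be recomputed from the reported proportions alone. This is a hypothetical illustration; the trial's reported OR of 0.65 came from a covariate-adjusted model, so the two values differ slightly:

```python
def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

# Proportions with cTnT elevation reported in PROACT
p_enalapril = 0.78  # enalapril arm
p_standard = 0.83   # standard-of-care arm

unadjusted_or = odds(p_enalapril) / odds(p_standard)
print(round(unadjusted_or, 2))  # about 0.73, in the same direction as the adjusted OR of 0.65
```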
There was also no significant difference between groups in terms of cTnI, a secondary endpoint. However, the proportion of patients testing positive for cTnI (47% in the enalapril group and 45% in controls) was substantially lower than that for cTnT.
Large Discrepancy
The “large discrepancy in the rate of injury” with cTnT “has implications for the clinical interpretation of cardiac biomarkers in routine practice, and we should proceed with caution,” Dr. Austin told this news organization.
The finding has implications because guidelines don’t currently differentiate based on the type of troponin, Dr. Austin said in a press release. “I was surprised by the difference, and I think this raises the question of what troponin we should be using.”
Secondary outcomes focused on cardiac function, measured using echocardiography and included left ventricular global longitudinal strain (LVGLS) and left ventricular ejection fraction (LVEF). These were measured at baseline, 4 weeks after the last anthracycline dose and 1 year after the final chemotherapy.
There was no between-group difference in LVGLS cardiac function (21% for enalapril vs 22% for standard of care; adjusted OR, 0.95; 95% CI, 0.33-2.74; P = .921). This was also true for LVEF (4% for enalapril vs 0% for standard of care group; adjusted OR, 4.89; 95% CI, 0.40-674.62; P = .236).
Asked what the research team plans to do next, Dr. Austin said “the immediate first step” is to continue following PROACT patients. “We know heart failure events and cardiac dysfunction can occur later down the line.”
Due to the challenge of enrolling patients into trials like PROACT, “we should come together as a sort of a broader cardiovascular/oncology academic community to try to understand how we can better recruit patients into these studies,” said Dr. Austin.
“We need to solve that problem before we then go on to maybe examine other potential preventative therapies.”
He doesn’t think an alternative ACE inhibitor would prove beneficial. “We need to look elsewhere for effective therapies in this area.”
He noted these new findings are “broadly consistent” with other trials that investigated angiotensin receptor blockers.
Tough Population
Commenting on the study during a media briefing, Anita Deswal, chair of the Department of Cardiology, Division of Internal Medicine, The University of Texas, commended the researchers for managing to enroll patients with cancer, as this is “a tough” population to get to agree to participate in a clinical trial.
“These patients are often overwhelmed financially, physically, and emotionally with the cancer diagnosis, as well as the cancer therapy and, therefore, to enroll them in something to prevent, maybe, some potential cardiac toxicity down the line, is really hard.”
Past trials investigating neuro-hormonal blockers to prevent cardiotoxicity have been criticized for enrolling patients at “too low risk,” said Dr. Deswal. “But investigators here went that step beyond and enrolled patients who were going to receive higher doses of anthracyclines, so kudos to that.”
And she noted investigators managed to get patients on almost the maximum dose of enalapril. “So, the drug was poised to have an effect — if it was there.”
The negative results may have something to do with endpoints. “Maybe we haven’t quite figured out what are the cutoffs for high sensitivity troponin I that identify patients truly at risk” of developing heart failure in the future.
Commenting on the study for this news organization, Anu Lala, MD, assistant professor of medicine at the Icahn School of Medicine at Mount Sinai, New York City, said the results may come as a surprise to some.
“ACE inhibitors are considered cardioprotective and for this reason are often used prophylactically in patients receiving chemotherapy.”
Dr. Lala agrees troponin may not be the right endpoint. “Another question is whether clinical outcomes should be followed in addition to symptoms or onset of any heart failure symptoms, which may hold greater prognostic significance.”
The study was funded by the National Institute for Health and Care Research.
A version of this article appeared on Medscape.com.
a new randomized trial showed.
The results suggested adding an ACE inhibitor doesn’t affect cardiac injury or cardiac function outcomes “and should not be used as a preventative strategy” in these patients, David Austin, MD, consultant cardiologist, Academic Cardiovascular Unit, The James Cook University Hospital, Middlesbrough, England, and chief investigator for the PROACT study, told this news organization.
But while these negative results are disappointing, he said, “we now have a definitive result in a robustly conducted trial that will take the field forward.”
The findings were presented on April 8, 2024, at the American College of Cardiology (ACC) Scientific Session 2024.
Anthracyclines, which are extracted from Streptomyces bacterium, are chemotherapy drugs widely used to treat several types of cancer. Doxorubicin is among the most clinically important anthracyclines.
While extremely effective, anthracyclines can cause irreversible damage to cardiac cells and ultimately impair cardiac function and even cause heart failure, which may only be evident years after exposure. “Cardiac injury is very common in patients treated with high dose anthracyclines,” noted Dr. Austin.
The open-label PROACT study included 111 adult patients, mean age 58 years and predominantly White and women, being treated for breast cancer (62%) or NHL (38%) at National Health Service hospitals in England with high-dose anthracycline-based chemotherapy.
Patients were randomized to standard care (six cycles of high-dose doxorubicin-equivalent anthracycline-based chemotherapy) plus the ACE inhibitor enalapril maleate or standard care alone. The mean chemotherapy dose was 328 mg/m2; any dose greater than 300 is considered high.
The starting dose of enalapril was 2.5 mg twice a day, which was titrated up to a maximum of 10 mg twice a day. The ACE inhibitor was started at least 2 days before chemotherapy began and finished 3 weeks after the last anthracycline dose.
During the study, enalapril was titrated to 20 mg in more than 75% of patients, with the mean dose being 17.7 mg.
Myocardial Injury Outcome
The primary outcome was myocardial injury measured by the presence (≥ 14 ng/L) of high sensitivity cardiac troponin T (cTnT) during anthracycline treatment and 1 month after the last dose of anthracycline.
cTnT is highly expressed in cardiomyocytes and has become a preferred biomarker for detecting acute myocardial infarction and other causes of myocardial injury.
Blood sampling for cTnT and cardiac troponin I (cTnI) was performed at baseline, within 72 hours prior to chemotherapy and at trial completion. All patients had negative troponin results at baseline, indicating no heart damage.
A majority of patients experienced elevations in troponin (78% in the enalapril group and 83% in the standard of care group), but there was no statistically significant difference between groups (adjusted odds ratio [OR], 0.65; 95% CI, 0.23-1.78; P = .405).
There was also no significant difference between groups in terms of cTnI, a secondary endpoint. However, the proportion of patients testing positive for cTnI (47% in the enalapril group and 45% in controls) was substantially lower than that for cTnT.
Large Discrepancy
The “large discrepancy in the rate of injury” with cTnT “has implications for the clinical interpretation of cardiac biomarkers in routine practice, and we should proceed with caution,” Dr. Austin told this news organization.
The finding has implications because guidelines don’t currently differentiate based on the type of troponin, Dr. Austin said in a press release. “I was surprised by the difference, and I think this raises the question of what troponin we should be using.”
Secondary outcomes focused on cardiac function, measured using echocardiography and included left ventricular global longitudinal strain (LVGLS) and left ventricular ejection fraction (LVEF). These were measured at baseline, 4 weeks after the last anthracycline dose and 1 year after the final chemotherapy.
There was no between-group difference in LVGLS cardiac function (21% for enalapril vs 22% for standard of care; adjusted OR, 0.95; 95% CI, 0.33-2.74; P = .921). This was also true for LVEF (4% for enalapril vs 0% for standard of care group; adjusted OR, 4.89; 95% CI, 0.40-674.62; P = .236).
Asked what the research team plans to do next, Dr. Austin said “the immediate first step” is to continue following PROACT patients. “We know heart failure events and cardiac dysfunction can occur later down the line.”
Due to the challenge of enrolling patients into trials like PROACT, “we should come together as a sort of a broader cardiovascular/oncology academic community to try to understand how we can better recruit patients into these studies,” said Dr. Austin.
“We need to solve that problem before we then go on to maybe examine other potential preventative therapies.”
He doesn’t think an alternative ACE inhibitor would prove beneficial. “We need to look elsewhere for effective therapies in this area.”
He noted these new findings are “broadly consistent” with other trials that investigated angiotensin receptor blockers.
Tough Population
Commenting on the study during a media briefing, Anita Deswal, chair, medicine, Department of Cardiology, Division of Internal Medicine, The University of Texas, commended the researchers for managing to enroll patients with cancer as this is “a tough” population to get to agree to being in a clinical trial.
“These patients are often overwhelmed financially, physically, and emotionally with the cancer diagnosis, as well as the cancer therapy and, therefore, to enroll them in something to prevent, maybe, some potential cardiac toxicity down the line, is really hard.”
Past trials investigating neuro-hormonal blockers to prevent cardiotoxicity have been criticized for enrolling patients at “too low risk,” said Dr. Deswal. “But investigators here went that step beyond and enrolled patients who were going to receive higher doses of anthracyclines, so kudos to that.”
And she noted investigators managed to get patients on almost the maximum dose of enalapril. “So, the drug was poised to have an effect — if it was there.”
The negative results may have something to do with endpoints. “Maybe we haven’t quite figured out what are the cutoffs for high sensitivity troponin I that identify patients truly at risk” of developing heart failure in the future.
Commenting on the study for this news organization, Anu Lala, MD, assistant professor of medicine at the Icahn School of Medicine at Mount Sinai, New York City, said the results may come as a surprise to some.
“ACE inhibitors are considered cardioprotective and for this reason are often used prophylactically in patients receiving chemotherapy.”
Dr. Lala agrees troponin may not be the right endpoint. “Another question is whether clinical outcomes should be followed in addition to symptoms or onset of any heart failure symptoms, which may hold greater prognostic significance.”
The study was funded by the National Institute for Health and Care Research.
A version of this article appeared on Medscape.com.
a new randomized trial showed.
The results suggested adding an ACE inhibitor doesn’t affect cardiac injury or cardiac function outcomes “and should not be used as a preventative strategy” in these patients, David Austin, MD, consultant cardiologist, Academic Cardiovascular Unit, The James Cook University Hospital, Middlesbrough, England, and chief investigator for the PROACT study, told this news organization.
But while these negative results are disappointing, he said, “we now have a definitive result in a robustly conducted trial that will take the field forward.”
The findings were presented on April 8, 2024, at the American College of Cardiology (ACC) Scientific Session 2024.
Anthracyclines, which are extracted from Streptomyces bacterium, are chemotherapy drugs widely used to treat several types of cancer. Doxorubicin is among the most clinically important anthracyclines.
While extremely effective, anthracyclines can cause irreversible damage to cardiac cells and ultimately impair cardiac function and even cause heart failure, which may only be evident years after exposure. “Cardiac injury is very common in patients treated with high dose anthracyclines,” noted Dr. Austin.
The open-label PROACT study included 111 adult patients (mean age, 58 years; predominantly White and female) being treated for breast cancer (62%) or non-Hodgkin lymphoma (38%) at National Health Service hospitals in England with high-dose anthracycline-based chemotherapy.
Patients were randomized to standard care (six cycles of high-dose doxorubicin-equivalent anthracycline-based chemotherapy) plus the ACE inhibitor enalapril maleate or standard care alone. The mean chemotherapy dose was 328 mg/m²; any dose greater than 300 mg/m² is considered high.
The starting dose of enalapril was 2.5 mg twice a day, which was titrated up to a maximum of 10 mg twice a day. The ACE inhibitor was started at least 2 days before chemotherapy began and finished 3 weeks after the last anthracycline dose.
During the study, enalapril was titrated to the full 20-mg total daily dose (10 mg twice a day) in more than 75% of patients, with a mean daily dose of 17.7 mg.
Myocardial Injury Outcome
The primary outcome was myocardial injury, defined as an elevated high-sensitivity cardiac troponin T (cTnT) level (≥ 14 ng/L) during anthracycline treatment and 1 month after the last dose of anthracycline.
cTnT is highly expressed in cardiomyocytes and has become a preferred biomarker for detecting acute myocardial infarction and other causes of myocardial injury.
Blood sampling for cTnT and cardiac troponin I (cTnI) was performed at baseline, within 72 hours prior to chemotherapy and at trial completion. All patients had negative troponin results at baseline, indicating no heart damage.
A majority of patients experienced elevations in troponin (78% in the enalapril group and 83% in the standard of care group), but there was no statistically significant difference between groups (adjusted odds ratio [OR], 0.65; 95% CI, 0.23-1.78; P = .405).
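For readers less familiar with the metric, the direction of this result can be illustrated by computing an unadjusted odds ratio straight from the reported proportions. This is a minimal sketch: the helper function and group proportions are taken from the article, but the trial itself reported an adjusted OR from a regression model, which this arithmetic does not reproduce.

```python
# Illustrative only: an unadjusted odds ratio computed from the reported
# troponin-elevation proportions (78% enalapril vs 83% standard care).
# PROACT's published estimate was an *adjusted* OR of 0.65.
def odds_ratio(p_treat: float, p_control: float) -> float:
    """Odds ratio for a binary outcome given two event proportions."""
    odds_t = p_treat / (1 - p_treat)
    odds_c = p_control / (1 - p_control)
    return odds_t / odds_c

unadjusted_or = odds_ratio(0.78, 0.83)
print(round(unadjusted_or, 2))  # ~0.73, same direction as the adjusted OR of 0.65
```

An OR below 1 favors enalapril, but as the confidence interval shows, the trial could not rule out no effect.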
There was also no significant difference between groups in terms of cTnI, a secondary endpoint. However, the proportion of patients testing positive for cTnI (47% in the enalapril group and 45% in controls) was substantially lower than that for cTnT.
Large Discrepancy
The “large discrepancy in the rate of injury” with cTnT “has implications for the clinical interpretation of cardiac biomarkers in routine practice, and we should proceed with caution,” Dr. Austin told this news organization.
The finding has implications because guidelines don’t currently differentiate based on the type of troponin, Dr. Austin said in a press release. “I was surprised by the difference, and I think this raises the question of what troponin we should be using.”
Secondary outcomes focused on cardiac function, measured using echocardiography and included left ventricular global longitudinal strain (LVGLS) and left ventricular ejection fraction (LVEF). These were measured at baseline, 4 weeks after the last anthracycline dose and 1 year after the final chemotherapy.
There was no between-group difference in LVGLS cardiac function (21% for enalapril vs 22% for standard of care; adjusted OR, 0.95; 95% CI, 0.33-2.74; P = .921). This was also true for LVEF (4% for enalapril vs 0% for standard of care group; adjusted OR, 4.89; 95% CI, 0.40-674.62; P = .236).
Asked what the research team plans to do next, Dr. Austin said “the immediate first step” is to continue following PROACT patients. “We know heart failure events and cardiac dysfunction can occur later down the line.”
Due to the challenge of enrolling patients into trials like PROACT, “we should come together as a sort of a broader cardiovascular/oncology academic community to try to understand how we can better recruit patients into these studies,” said Dr. Austin.
“We need to solve that problem before we then go on to maybe examine other potential preventative therapies.”
He doesn’t think an alternative ACE inhibitor would prove beneficial. “We need to look elsewhere for effective therapies in this area.”
He noted these new findings are “broadly consistent” with other trials that investigated angiotensin receptor blockers.
Tough Population
Commenting on the study during a media briefing, Anita Deswal, MD, chair of the Department of Cardiology, Division of Internal Medicine, The University of Texas MD Anderson Cancer Center, Houston, commended the researchers for managing to enroll patients with cancer, "a tough" population to persuade to participate in a clinical trial.
“These patients are often overwhelmed financially, physically, and emotionally with the cancer diagnosis, as well as the cancer therapy and, therefore, to enroll them in something to prevent, maybe, some potential cardiac toxicity down the line, is really hard.”
Past trials investigating neuro-hormonal blockers to prevent cardiotoxicity have been criticized for enrolling patients at “too low risk,” said Dr. Deswal. “But investigators here went that step beyond and enrolled patients who were going to receive higher doses of anthracyclines, so kudos to that.”
And she noted investigators managed to get patients on almost the maximum dose of enalapril. “So, the drug was poised to have an effect — if it was there.”
The negative results may have something to do with the endpoints, she said. "Maybe we haven't quite figured out what are the cutoffs for high sensitivity troponin I that identify patients truly at risk" of developing heart failure in the future.
Commenting on the study for this news organization, Anu Lala, MD, assistant professor of medicine at the Icahn School of Medicine at Mount Sinai, New York City, said the results may come as a surprise to some.
“ACE inhibitors are considered cardioprotective and for this reason are often used prophylactically in patients receiving chemotherapy.”
Dr. Lala agreed troponin may not be the right endpoint. "Another question is whether clinical outcomes should be followed in addition to symptoms or onset of any heart failure symptoms, which may hold greater prognostic significance."
The study was funded by the National Institute for Health and Care Research.
A version of this article appeared on Medscape.com.
FROM THE ACC 2024
New and Improved Option for Detecting Neurologic Pathogens?
DENVER — , results of a real-world analysis show.
Metagenomic next-generation sequencing (mNGS) of RNA and DNA from cerebrospinal fluid (CSF) simultaneously tests for a wide range of infectious agents and identifies individual pathogens, including viruses, bacteria, fungi, and parasites. About half of patients with a suspected central nervous system (CNS) infection may go undiagnosed due to a lack of tools that detect rare pathogens. Although mNGS is currently available only in specialized laboratories, expanding access to the diagnostic could address this problem, investigators noted.
“Our results justify incorporation of CSF mNGS testing as part of the routine diagnostic workup in hospitalized patients who present with potential central nervous system infections,” study investigator Charles Chiu, MD, PhD, professor in the Departments of Laboratory Medicine and Medicine (Infectious Diseases) and director of the Clinical Microbiology Laboratory, University of California San Francisco (UCSF), said at a press conference.
The findings were presented at the 2024 annual meeting of the American Academy of Neurology (AAN).
‘Real-World’ Performance
Accurate diagnosis of CNS infections on the basis of CSF, imaging, patient history, and presentation is challenging, the researchers noted. “Roughly 50% of patients who present with a presumed central nervous system infection actually end up without a diagnosis,” Dr. Chiu said.
This is due to the lack of diagnostic tests for rare pathogens and because noninfectious conditions like cancer, autoantibody syndrome, or vasculitis can mimic an infection, he added.
CSF is “very limiting,” Dr. Chiu noted. “We are unable, practically, from a volume perspective, as well as a cost and turnaround time perspective, to be able to send off every possible test for every possible organism.”
The inability to rapidly pinpoint the cause of an infectious disease like meningitis or encephalitis can cause delays in appropriate treatment.
To assess the “real-world” performance of mNGS, researchers collected 4828 samples from mainly hospitalized patients across the United States and elsewhere from 2016 to 2023.
Overall, the test detected at least one pathogen in 16.6% of cases. More than 70% were DNA or RNA viruses, followed by bacteria, fungi, and parasites.
High Sensitivity
The technology was also able to detect novel or emerging neurotropic pathogens, including a yellow fever virus responsible for a transfusion-transmitted encephalitis outbreak and Fusarium solani, which caused a fungal meningitis outbreak.
Investigators also conducted a chart review on a subset of 1052 patients at UCSF to compare the performance of CSF mNGS testing with commonly used in-hospital diagnostic tests.
“We showed that as a single test, spinal fluid mNGS has an overall sensitivity of 63%, specificity of 99%, and accuracy of 90%,” said Dr. Chiu.
The sensitivity of mNGS was significantly higher compared with direct-detection testing from CSF (46%); direct-detection testing performed on samples other than CSF, such as blood (15%); and indirect serologic testing looking for antibodies (29%) (P < .001 for all).
This suggests that mNGS could potentially “detect the hundreds of different pathogens that cause clinically indistinguishable infections,” Dr. Chiu said.
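As a rough illustration of how the three reported figures relate to one another, the sketch below computes them from a standard 2x2 confusion matrix. The counts are hypothetical, chosen only so the arithmetic reproduces the reported 63%/99%/90%, and do not come from the study's actual data.

```python
# How sensitivity, specificity, and accuracy relate to a 2x2 confusion
# matrix. Counts below are hypothetical, picked to reproduce the reported
# figures (sensitivity 63%, specificity 99%, accuracy 90%).
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),              # detected among truly infected
        "specificity": tn / (tn + fp),              # negative among truly uninfected
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical cohort: 100 infected, 300 uninfected patients
m = diagnostic_metrics(tp=63, fp=3, fn=37, tn=297)
print(m)  # sensitivity 0.63, specificity 0.99, accuracy 0.90
```

Note that accuracy depends on how many uninfected patients are in the cohort, which is why a test with 63% sensitivity can still show 90% overall accuracy.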
mNGS testing is currently confined to large specialized or reference laboratories. For greater access to the test, routine clinical labs or hospital labs would have to implement it, said Dr. Chiu.
“If you can bring the technology to the point of care, directly to the hospital lab that’s running the test, we can produce results that would have a more rapid impact on patients,” he said.
Guiding Therapy
Ultimately, he added, the purpose of a diagnostic test is to “generate actionable information that could potentially guide therapy.”
Researchers are now evaluating medical charts of the same subcohort of patients to determine whether the test made a clinical difference.
“We want to know if it had a positive or negative or no clinical impact on the management and treatment of patients,” said Dr. Chiu. “Producing data like this will help us define the role of this test in the future as part of the diagnostic paradigm.”
The researchers are also working on a cost/benefit analysis, and Dr. Chiu said that US Food and Drug Administration approval of the test is needed “to establish a blueprint for reimbursement.”
Commenting on the findings, Jessica Robinson-Papp, MD, professor and vice chair of clinical research, Department of Neurology, Icahn School of Medicine at Mount Sinai, New York City, said that the technology could be useful, especially in developing countries with higher rates of CNS infections.
“What’s really exciting about it is you can take a very small CSF sample, like 1 mL, and in an unbiased way just screen for all different kinds of pathogens including both DNA and RNA viruses, parasites, bacteria, and fungi, and sort of come up with whether there’s a pathogen there or whether there is no pathogen there,” she said.
However, there’s a chance that this sensitive technique will pick up contaminants, she added. “For example, if there’s a little environmental bacterium either on the skin or in the water used for processing, it can get reads on that.”
The study received support from Delve Bio and the Chan Zuckerberg Biohub.
Dr. Chiu has received personal compensation for serving on a Scientific Advisory or Data Safety Monitoring Board for Biomeme and has stock in Delve Bio, Poppy Health, Mammoth Biosciences, and BiomeSense and has received intellectual property interests from a discovery or technology relating to healthcare. Dr. Robinson-Papp has no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
FROM AAN 2024
Tension, Other Headache Types Robustly Linked to Attempted, Completed Suicide
DENVER –
, results of a large study suggest. The risk for suicide attempt was four times higher in people with trigeminal autonomic cephalalgias (TAC), and the risk for completed suicide was double among those with posttraumatic headache compared with individuals with no headache.
The retrospective analysis included data on more than 100,000 headache patients from a Danish registry.
“The results suggest there’s a unique risk among headache patients for attempted and completed suicide,” lead investigator Holly Elser, MD, MPH, PhD, resident, Department of Neurology, University of Pennsylvania, Philadelphia, said at the 2024 annual meeting of the American Academy of Neurology, where the findings were presented. “This really underscores the potential importance of complementary psychiatric evaluation and treatment for individuals diagnosed with headache.”
Underestimated Problem
Headache disorders affect about half of working-age adults and are among the leading causes of productivity loss, absence from work, and disability.
Prior research suggests headache disorders often co-occur with psychiatric illness including depression, anxiety, posttraumatic stress disorder, and even attempted suicide.
However, previous studies that showed an increased risk for attempted suicide in patients with headache relied heavily on survey data and mostly focused on patients with migraine. There was little information on other headache types and on the risk for completed suicide.
Researchers used Danish registries to identify 64,057 patients with migraine, 40,160 with tension-type headache (TTH), 5743 with TAC, and 4253 with posttraumatic headache, all diagnosed from 1995 to 2019.
Some 5.8% of those with migraine, 6.3% with TAC, 7.2% with TTH, and 7.2% with posttraumatic headache had a mood disorder (depression and anxiety combined) at baseline.
Those without a headache diagnosis were matched 5:1 to those with a headache diagnosis by sex and birth year.
Across all headache disorders, baseline prevalence of mood disorder was higher among those with headache versus population-matched controls. Dr. Elser emphasized that these are people diagnosed with a mood disorder in the inpatient, emergency department, or outpatient specialist clinic setting, “which means we are almost certainly underestimating the true burden of mood symptoms in our cohort,” she said.
Researchers identified attempted suicides using diagnostic codes. For completed suicide, they determined whether those who attempted suicide died within 30 days of the attempt.
For each headache type, investigators examined both the absolute and relative risk for attempted and completed suicides and estimated the risk at intervals of 5, 10, and 20 years after initial headache diagnosis.
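To make the absolute-versus-relative distinction concrete, here is a minimal sketch showing how a risk difference and a relative risk are computed from cumulative incidence in exposed versus matched unexposed cohorts. The event counts are hypothetical, not the Danish registry's estimates.

```python
# Illustrative only: absolute risk difference (RD) and relative risk from
# cumulative incidence in two cohorts. Counts are hypothetical and do not
# reflect the registry data.
def risk_measures(events_exposed: int, n_exposed: int,
                  events_unexposed: int, n_unexposed: int) -> dict:
    risk_e = events_exposed / n_exposed
    risk_u = events_unexposed / n_unexposed
    return {
        "risk_difference": risk_e - risk_u,  # absolute excess risk
        "relative_risk": risk_e / risk_u,
    }

# Hypothetical 20-year attempted-suicide counts per 10,000 people
print(risk_measures(80, 10_000, 20, 10_000))
# risk_difference ~0.006 (60 excess events per 10,000); relative_risk 4.0
```

The RD conveys how many extra events occur per person over follow-up, while the relative measure (here a risk ratio, analogous in spirit to the study's hazard ratios) conveys how many times more likely the event is.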
Robust Link
The “power of this study is that we asked a simple, but important question, and answered it with simple, but appropriate, methodologic techniques,” Dr. Elser said.
The estimated risk differences (RDs) for attempted suicide were strongest for TAC and posttraumatic headache and for longer follow-ups. The RDs for completed suicide were largely the same but of a smaller magnitude and were “relatively less precise,” reflecting the “rarity of this outcome,” said Dr. Elser.
After adjusting for sex, age, education, income, comorbidities, and baseline medical and psychiatric diagnoses, researchers found the strongest association for attempted suicide was among those with TAC (adjusted hazard ratio [aHR], 4.25; 95% CI, 2.85-6.33).
“A hazard ratio of 4 is enormous” for this type of comparison, Dr. Elser noted.
For completed suicide, the strongest association was with posttraumatic headache (aHR, 2.19; 95% CI, 0.78-6.16).
The study revealed a robust association with attempted and completed suicide across all headache types, including TTH, noted Dr. Elser. The link between tension headaches and suicide “was the most striking finding to me because I think of that as sort of a benign and common headache disorder,” she said.
This was an observational study, so “it’s not clear whether headache is playing an etiological role in the relationship with suicide,” she said. “It’s possible there are common shared risk factors or confounders that explain the relationship in full or in part that aren’t accounted for in this study.”
Ask About Mood
The results underscore the need for psychiatric evaluations in patients with a headache disorder. “For me, this is just going to make me that much more likely to ask my patients about their mood when I see them in clinic,” Dr. Elser said.
After asking patients with headache about their mood and stress at home and at work, physicians should have a “low threshold to refer to a behavioral health provider,” she added.
Future research should aim to better understand the link between headache and suicide risk, with a focus on the mechanisms behind low- and high-risk subgroups, said Dr. Elser.
A limitation of the study was that headache diagnoses were based on inpatient, emergency department, and outpatient specialist visits but not on visits to primary care practitioners. The study didn’t include information on headache severity or frequency and included only people who sought treatment for their headaches.
Though it’s unlikely the results “are perfectly generalizable” with respect to other geographical or cultural contexts, “I don’t think this relationship is unique to Denmark based on the literature to date,” Dr. Elser said.
Commenting on the study, session co-chair Todd J. Schwedt, MD, professor of neurology, Mayo Clinic Arizona, Phoenix, and president-elect of the American Headache Society, noted that the study offers important findings “that demonstrate the enormous negative impact that headaches can exert.”
It’s “a strong reminder” that clinicians should assess the mental health of their patients with headaches and offer treatment when appropriate, he said.
The study received support from Aarhus University. No relevant conflicts of interest were reported.
A version of this article appeared on Medscape.com.
DENVER –
, results of a large study suggest.The risk for suicide attempt was four times higher in people with trigeminal and autonomic cephalalgias (TAC), and the risk for completed suicide was double among those with posttraumatic headache compared with individuals with no headache.
The retrospective analysis included data on more than 100,000 headache patients from a Danish registry.
“The results suggest there’s a unique risk among headache patients for attempted and completed suicide,” lead investigator Holly Elser, MD, MPH, PhD, resident, Department of Neurology, University of Pennsylvania, Philadelphia, said at the 2024 annual meeting of the American Academy of Neurology, where the findings were presented. “This really underscores the potential importance of complementary psychiatric evaluation and treatment for individuals diagnosed with headache.”
Underestimated Problem
Headache disorders affect about half of working-age adults and are among the leading causes of productivity loss, absence from work, and disability.
Prior research suggests headache disorders often co-occur with psychiatric illness including depression, anxiety, posttraumatic stress disorder, and even attempted suicide.
However, previous studies that showed an increased risk for attempted suicide in patients with headache relied heavily on survey data and mostly focused on patients with migraine. There was little information on other headache types and on the risk for completed suicide.
Researchers used Danish registries to identify 64,057 patients with migraine, 40,160 with tension-type headache (TTH), 5743 with TAC, and 4253 with posttraumatic headache, all diagnosed from 1995 to 2019.
Some 5.8% of those with migraine, 6.3% with TAC, 7.2% with TTH, and 7.2% with posttraumatic headache, had a mood disorder (depression and anxiety combined) at baseline.
Those without a headache diagnosis were matched 5:1 to those with a headache diagnosis by sex and birth year.
Across all headache disorders, baseline prevalence of mood disorder was higher among those with headache versus population-matched controls. Dr. Elser emphasized that these are people diagnosed with a mood disorder in the inpatient, emergency department, or outpatient specialist clinic setting, “which means we are almost certainly underestimating the true burden of mood symptoms in our cohort,” she said.
Researchers identified attempted suicides using diagnostic codes. For completed suicide, they determined whether those who attempted suicide died within 30 days of the attempt.
For each headache type, investigators examined both the absolute and relative risk for attempted and completed suicides and estimated the risk at intervals of 5, 10, and 20 years after initial headache diagnosis.
Robust Link
The “power of this study is that we asked a simple, but important question, and answered it with simple, but appropriate, methodologic techniques,” Dr. Elser said.
The estimated risk differences (RDs) for attempted suicide were strongest for TAC and posttraumatic headache and for longer follow-ups. The RDs for completed suicide were largely the same but of a smaller magnitude and were “relatively less precise,” reflecting the “rarity of this outcome,” said Dr. Elser.
After adjusting for sex, age, education, income, comorbidities, and baseline medical and psychiatric diagnoses, researchers found the strongest association for attempted suicide was among those with TAC (adjusted hazard ratio [aHR], 4.25; 95% CI, 2.85-6.33).
“A hazard ratio of 4 is enormous” for this type of comparison, Dr. Elser noted.
For completed suicide, the strongest association was with posttraumatic headache (aHR, 2.19; 95% CI, 0.78-6.16).
The study revealed a robust association with attempted and completed suicide across all headache types, including TTH, noted Dr. Elser. The link between tension headaches and suicide “was the most striking finding to me because I think of that as sort of a benign and common headache disorder,” she said.
This was an observational study, so “it’s not clear whether headache is playing an etiological role in the relationship with suicide,” she said. “It’s possible there are common shared risk factors or confounders that explain the relationship in full or in part that aren’t accounted for in this study.”
Ask About Mood
The results underscore the need for psychiatric evaluations in patients with a headache disorder. “For me, this is just going to make me that much more likely to ask my patients about their mood when I see them in clinic,” Dr. Elser said.
After asking patients with headache about their mood and stress at home and at work, physicians should have a “low threshold to refer to a behavioral health provider,” she added.
Future research should aim to better understand the link between headache and suicide risk, with a focus on the mechanisms behind low- and high-risk subgroups, said Dr. Elser.
A limitation of the study was that headache diagnoses were based on inpatient, emergency department, and outpatient specialist visits but not on visits to primary care practitioners. The study didn’t include information on headache severity or frequency and included only people who sought treatment for their headaches.
Though it’s unlikely the results “are perfectly generalizable” with respect to other geographical or cultural contexts, “I don’t think this relationship is unique to Denmark based on the literature to date,” Dr. Elser said.
Commenting on the study, session co-chair Todd J. Schwedt, MD, professor of neurology, Mayo Clinic Arizona, Phoenix, and president-elect of the American Headache Society, noted that the study offers important findings “that demonstrate the enormous negative impact that headaches can exert.”
It’s “a strong reminder” that clinicians should assess the mental health of their patients with headaches and offer treatment when appropriate, he said.
The study received support from Aarhus University. No relevant conflicts of interest were reported.
A version of this article appeared on Medscape.com.
FROM AAN 2024
Delirium Linked to a Threefold Increased Risk for Dementia
Delirium is associated with a more than threefold increased risk for incident dementia in older adults, new research showed.
Incident dementia risk was more than three times higher in those who experienced just one episode of delirium, with each additional episode linked to a further 20% increase in dementia risk. The association was strongest in men.
Patients with delirium also had a 39% higher mortality risk than those with no history of delirium.
“We have known for a long time that delirium is dangerous, and this provides evidence that it’s even more dangerous than perhaps we had appreciated,” said study investigator Emily H. Gordon, PhD, MBBS, a geriatrician and senior lecturer at the University of Queensland, Brisbane, Australia.
“But we also know delirium is preventable. There are no excuses anymore; we really need to work together to improve the hospital system, to implement what are known to be effective interventions,” she added.
The findings were published online in The BMJ.
Close Matching
Prior studies that suggested an association between delirium and dementia were relatively small with short follow-up and varied in their adjustment for confounders. They also didn’t account for the competing risk for death, researchers noted.
Investigators used a linked New South Wales (NSW) statewide dataset that includes records of care episodes from all NSW hospitals as well as personal, administrative, clinical, and death information.
The study included an eligible sample of 626,467 older adults without dementia at baseline with at least one hospital admission between 2009 and 2014. For these patients, researchers calculated a hospital frailty risk score and collected other information including primary diagnosis and mean length of hospital stay and stay in the intensive care unit. From diagnostic codes, they categorized patients into no delirium and delirium groups and determined the number of delirium episodes.
Investigators matched patients in the delirium group to patients with no delirium according to characteristics with potential to confound the association between delirium and risk for dementia, including age, gender, frailty, reason for hospitalization, and length of stay in hospital and intensive care.
The matched study sample included 55,211 (mean age, 83 years) each in the delirium and the no delirium groups. Despite matching, the length of hospital stay for the index episode was longer for the delirium group than the no delirium group (mean, 9 days vs 6 days).
The primary outcomes were death and incident dementia, determined via diagnostic codes. During a follow-up of 5.25 years, 58% of patients died, and 17% had a new dementia diagnosis.
Among patients with at least one episode of delirium, the rate of incident dementia was 3.4 times higher than in those without delirium. After accounting for the competing risk for death, incident dementia risk remained three times higher among the delirium group (hazard ratio [HR], 3.00; 95% CI, 2.91-3.10).
This association was stronger for men than women (HR, 3.17 and 2.88, respectively; P = .004).
Sex Differences
The study is thought to be the first to identify a difference between sexes in dementia risk and delirium, Dr. Gordon said. It’s possible delirium in men is more severe in intensity or lasts longer than in women, or the male brain is, for whatever reason, more vulnerable to the effects of delirium, said Dr. Gordon. But she stressed these are only theories.
Investigators also found a mortality rate 1.4 times higher in the delirium group versus those without delirium, equating to a 39% increased risk for death (HR, 1.39; 95% CI, 1.37-1.41). The risk was similar for men and women (interaction P = .62).
When researchers categorized delirium by number of episodes, they found each additional episode was associated with a 10% increased risk for death (HR, 1.10; 95% CI, 1.09-1.12).
In addition to its large size, long follow-up, and close matching, what sets this new study apart from previous research is it accounted for the competing risk for death, said Dr. Gordon.
“This is really important because you’re not going to get dementia if you die, and in this population, the rate of death is incredibly high,” she said. “If we just assume people who died didn’t get dementia, then that skews the results.”
Causal Link?
For those who experienced at least one episode of delirium within the first 12 months, each additional episode of delirium was associated with a 20% increased risk for dementia (HR, 1.20; 95% CI, 1.18-1.23).
That dose-response association suggests a causal link between the two, Dr. Gordon said.
“The number one way to prove causality is to do a randomized controlled trial,” which isn’t feasible with delirium, she said. “By demonstrating a dose-response relationship suggests that it could be a causal pathway.”
Exact mechanisms linking delirium with dementia are unclear. Delirium might uncover preexisting or preclinical dementia, or it might cause dementia by accelerating underlying neuropathologic processes or de novo mechanisms, the authors noted.
Study limitations included the potential for residual confounding from unmeasured variables in the matching criteria. Delirium and dementia diagnoses depended on clinical coding of medical information recorded in the administrative dataset, and under-coding of dementia during hospitalization is well-recognized.
Although the study controlled for length of stay in hospital and in intensive care, this may not have fully captured differences in severity of medical conditions. Data about the duration and severity of delirium episodes were also unavailable, which limited the dose-response analysis.
Commenting on the findings, Christopher Weber, PhD, director of Global Science Initiatives at the Alzheimer’s Association, said the results are consistent with other research on the association between delirium and incident dementia.
The increased risk for dementia following delirium in males is “an interesting finding,” said Dr. Weber. “This suggests a need for more research to understand the impact of sex differences in delirium, as well as research to see if preventing incidents of delirium could ultimately reduce rates of dementia.”
The study received support from the National Health and Medical Research Council: Partnership Centre for Health System Sustainability. Dr. Gordon and Dr. Weber reported no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
FROM THE BMJ
New Insight Into ‘Demon’ Facial Visual Perception Disorder
Images generated with photo-editing software are the first to accurately depict the facial distortions experienced by patients with prosopometamorphopsia (PMO), a rare visual disorder that is often mistaken for mental illness.
PMO is a rare, often misdiagnosed, visual disorder in which human faces appear distorted in shape, texture, position, or color. Most patients with PMO see these distorted facial features all the time, whether they are looking at a face in person, on a screen, or on paper.
Unlike most patients with PMO, the patient in this case report saw the distortions only when encountering someone in person, not on a screen or on paper.
This allowed researchers to use editing software to create an image on a computer screen that matched the patient’s distorted view.
“This new information should help healthcare professionals grasp the intensity of facial distortions experienced by people with PMO,” study investigator Brad Duchaine, PhD, professor, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire, told this news organization.
“A substantial number of people we have worked with have been misdiagnosed, often with schizophrenia or some sort of psychotic episode, and some have been put on antipsychotics despite the fact they’ve just had some little tweak in their visual system,” he added.
The report was published online on March 23 in The Lancet.
Prevalence Underestimated?
Although fewer than 100 cases of PMO have been reported in the literature, Dr. Duchaine said this is likely an underestimate. Based on responses to a website his team created to recruit affected patients, he said he believes “there are far more cases out there than we realize.”
PMO might be caused by a neurologic event that leads to a lesion in the right temporal lobe, near areas of facial processing, but in many cases, the cause is unclear.
PMO can occur in the context of head trauma, as well as cerebral infarction, epilepsy, migraine, and hallucinogen-persisting perception disorder, researchers noted. The condition can also manifest without detectable structural brain changes.
“We’re hearing from a lot of people through our website who haven’t had, or aren’t aware of having had, a neurologic event that coincided with the onset of face distortions,” Dr. Duchaine noted.
The patient in this study had a significant head injury at age 43 that led to hospitalization. He was exposed to high levels of carbon monoxide about 4 months before his symptoms began, but it’s not clear if the PMO and the incident are related.
He was not prescribed any medications and reported no history of illicit substance use.
The patient also had a history of bipolar affective disorder and posttraumatic stress disorder. His visions of distorted faces were not accompanied by delusional beliefs about the people he encountered, the investigators reported.
Neuropsychological tests were normal, and there were no deficits of visual acuity or color vision. Computer-based face perception tests indicated mild impairment in recognition of facial identity but normal recognition of facial expression.
The patient did not typically see distortions when looking at objects, such as a coffee mug or computer. However, said Dr. Duchaine, “if you get enough text together, the text will start to swirl for him.”
Eye-Opening Findings
The patient described the visual facial distortions as “severely stretched features, with deep grooves on the forehead, cheeks, and chin.” Even though these faces were distorted, he was able to recognize the people he saw.
Because the patient reported no distortion when viewing facial images on a screen, researchers asked him to compare what he saw when he looked at the face of a person in the room to a photograph of the same person on a computer screen.
The patient alternated between observing the in-person face, which he perceived as distorted, and the photo on the screen, which he perceived as normal.
Researchers used real-time feedback from the patient and photo-editing software to manipulate the photo on the screen until the photo and the patient’s visual perception of the person in the room matched.
“This is the first time we have actually been able to have a visualization where we are really confident that that’s what someone with PMO is experiencing,” said Dr. Duchaine. “If he were a typical PMO case, he would look at the face in real life and look at the face on the screen and the face on the screen would be distorting as well.”
The researchers discovered that the patient’s distortions are influenced by color; if he looks at faces through a red filter, the distortions are greatly intensified, but if he looks at them through a green filter, the distortions are greatly reduced. He now wears green-filtered glasses in certain situations.
Dr. Duchaine hopes this case will open the eyes of clinicians. “These sorts of visual distortions that your patient is telling you about are probably real, and they’re not a sign of broader mental illness; it’s a problem limited to the visual system,” he said.
The research was funded by the Hitchcock Foundation. The authors reported no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
Images generated with photo-editing software are the first to accurately depict the facial distortions experienced by patients with prosopometamorphopsia (PMO), a rare visual disorder that is often mistaken for mental illness.
PMO is an often misdiagnosed visual disorder in which human faces appear distorted in shape, texture, position, or color. Most patients with PMO see these distorted features all the time, whether they are looking at a face in person, on a screen, or on paper.
Unlike most patients with PMO, the patient in this report saw the distortions only when encountering someone in person, not on a screen or on paper.
This allowed researchers to use editing software to create an image on a computer screen that matched the patient’s distorted view.
“This new information should help healthcare professionals grasp the intensity of facial distortions experienced by people with PMO,” study investigator Brad Duchaine, PhD, professor, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire, told this news organization.
“A substantial number of people we have worked with have been misdiagnosed, often with schizophrenia or some sort of psychotic episode, and some have been put on antipsychotics despite the fact they’ve just had some little tweak in their visual system,” he added.
The report was published online on March 23 in The Lancet.
Prevalence Underestimated?
Although fewer than 100 cases of PMO have been reported in the literature, Dr. Duchaine said this is likely an underestimate. Based on the response to a website his team created to recruit affected patients, he said he believes “there are far more cases out there than we realize.”
PMO might be caused by a neurologic event that leads to a lesion in the right temporal lobe, near areas of facial processing, but in many cases, the cause is unclear.
PMO can occur in the context of head trauma, as well as cerebral infarction, epilepsy, migraine, and hallucinogen-persisting perception disorder, researchers noted. The condition can also manifest without detectable structural brain changes.
“We’re hearing from a lot of people through our website who haven’t had, or aren’t aware of having had, a neurologic event that coincided with the onset of face distortions,” Dr. Duchaine noted.
The patient in this study had a significant head injury at age 43 that led to hospitalization. He was also exposed to high levels of carbon monoxide about 4 months before his symptoms began, but it’s not clear whether either event is related to the PMO.
He was not prescribed any medications and reported no history of illicit substance use.
The patient also had a history of bipolar affective disorder and posttraumatic stress disorder. His visions of distorted faces were not accompanied by delusional beliefs about the people he encountered, the investigators reported.
Neuropsychological tests were normal, and there were no deficits of visual acuity or color vision. Computer-based face perception tests indicated mild impairment in recognition of facial identity but normal recognition of facial expression.
The patient did not typically see distortions when looking at objects, such as a coffee mug or computer. However, said Dr. Duchaine, “if you get enough text together, the text will start to swirl for him.”
Eye-Opening Findings
The patient described the visual facial distortions as “severely stretched features, with deep grooves on the forehead, cheeks, and chin.” Even though these faces were distorted, he was able to recognize the people he saw.
Because the patient reported no distortion when viewing facial images on a screen, researchers asked him to compare what he saw when he looked at the face of a person in the room to a photograph of the same person on a computer screen.
The patient alternated between observing the in-person face, which he perceived as distorted, and the photo on the screen, which he perceived as normal.
Researchers used real-time feedback from the patient and photo-editing software to manipulate the photo on the screen until the photo and the patient’s visual perception of the person in the room matched.
“This is the first time we have actually been able to have a visualization where we are really confident that that’s what someone with PMO is experiencing,” said Dr. Duchaine. “If he were a typical PMO case, he would look at the face in real life and look at the face on the screen and the face on the screen would be distorting as well.”
The researchers discovered that the patient’s distortions are influenced by color; if he looks at faces through a red filter, the distortions are greatly intensified, but if he looks at them through a green filter, the distortions are greatly reduced. He now wears green-filtered glasses in certain situations.
Dr. Duchaine hopes this case will open the eyes of clinicians. “These sorts of visual distortions that your patient is telling you about are probably real, and they’re not a sign of broader mental illness; it’s a problem limited to the visual system,” he said.
The research was funded by the Hitchcock Foundation. The authors reported no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
Dogs Able to Sniff Out PTSD, Other Trauma, in Human Breath
Dogs can detect stress-related compounds in the breath of people experiencing early signs of trauma, including those with posttraumatic stress disorder (PTSD), a new proof-of-concept study suggested.
The research provides evidence that some service dogs could be trained to detect impending episodes of distress in people with PTSD through their breath and perhaps prompt the person to use coping skills to manage the episode.
“Ours is the first study to demonstrate that at least some dogs can detect putative stress-related volatile organic compounds in human breath that are associated with PTSD symptoms,” study author Laura Kiiroja, PhD candidate, department of psychology and neuroscience, faculty of science, Dalhousie University, Halifax, Nova Scotia, Canada, told this news organization.
The study was published online on March 28, 2024, in Frontiers in Allergy.
Heightened Sense of Smell
The lifetime prevalence of PTSD is about 8% in the general population, but data show it can reach 23% in veterans. In addition, many more trauma-exposed individuals experience subthreshold symptoms.
Research is investigating the application of dogs’ sense of smell — which is up to 100,000 times more sensitive than humans’ — to detect cancers, viruses, parasites, hypoglycemia, and seizures in humans.
The new study included 26 mostly civilian “donors” (mean age, 31 years; 18 females) who had experienced various types of trauma but had no severe mental illness. More than 50% met the criteria for PTSD.
Participants were recruited from a study examining neurocognitive mechanisms underlying the potential links between trauma and cannabis use. However, participants in the dog study abstained from using cannabis for at least 12 hours prior to the study experiments.
Breath Donors
Breath samples were collected via disposable medical-grade face masks at baseline and during ensuing experiments. In total, 40 breath sample sets were collected.
Two female companion dogs — Ivy, a red golden retriever, and Callie, a German shepherd/Belgian Malinois mix — were trained to identify target odors from the samples.
The animals were tested to determine whether they could discriminate between breath samples collected from the same “breath donors” during a relatively relaxed state and during induced stress, in what is known as a two-alternative forced-choice discrimination test.
The dogs’ ability to discern trauma cues from breath samples of various individuals was tested by presenting one sample (baseline or trauma cue) at a time. The researchers used signal detection theory to evaluate the sensitivity and specificity of the dogs in detecting human stress volatile organic compounds (VOCs).
Investigators found the dogs had about a 90% accuracy rate across all sample sets in the discrimination experiment and 74% and 81% accuracy for Ivy and Callie, respectively, in the detection experiment.
“Our study contributed to the evidence showing that not only are dogs able to detect some physical health conditions in humans but also that some mental health conditions alter the released VOCs in a way that is detectable by dogs,” Ms. Kiiroja said.
Emotion Detectors
At baseline and during each cue exposure, donors reported their affect using the Positive and Negative Affect Schedule. Ivy’s performance correlated with the donors’ self-reported anxiety, and Callie’s performance correlated with the donors’ self-reported shame.
Based on these correlations, the researchers speculate Ivy detected VOCs that likely originated from the sympathetic-adrenomedullary axis, which involves adrenaline and noradrenaline.
VOCs detected by Callie likely originated in the hypothalamic-pituitary-adrenal axis, which involves cortisol and corticosterone. These two endocrine subsystems play a major role in reestablishing homeostasis in response to a stressor.
The results suggest some service dogs could alert to upcoming intrusion and hyperarousal symptoms before physical signs manifest and before the person is even aware of the situation, said Ms. Kiiroja.
“This would enable earlier distraction and reminders to use skills learned in psychotherapy; this would have a better likelihood of increasing the efficacy of these skills and preventing further escalation of the arousal,” she said.
Most breath samples likely included both early and late stress VOCs because the donors wore the face mask for a relatively long time during the trauma cue, the authors noted. Future studies should test dogs’ olfactory acuity on samples collected minutes after the trauma cue, they added.
Another limitation is that all donors were regular cannabis users, so the results may not generalize to others. However, the fact the dogs demonstrated their detection ability even with cannabis users makes the proof-of-concept “more stringent,” Ms. Kiiroja said.
The goal of the study was to see if some dogs are capable of detecting stress VOCs from people with trauma histories in response to trauma cues, so the small number of dogs in the study isn’t a limitation, the authors noted.
‘Wonderful Work’
Commenting on the findings, Elspeth Ritchie, MD, chair of psychiatry, MedStar Washington Hospital Center, Washington, described the research as “wonderful work.” Dr. Ritchie, who was not part of the study, has also studied PTSD support dogs.
The study is yet another illustration of the “amazing things dogs can do ... not just for veterans but for people with mental illness,” she said, adding that dogs can be a source of comfort and help people manage their anxiety.
Training PTSD service dogs can be expensive, with some well-accredited organizations charging about $50,000 for an animal, Dr. Ritchie said. Training a dog to detect VOCs could also be costly, she added.
Although such research has increased in recent years, it’s unclear how it would be applied in practice. Identifying funding for this sort of study and designing trials would also be challenging, Dr. Ritchie added.
“The idea is good, but when you try to operationalize it, it gets tricky,” she said.
The fact that all donors in the study used cannabis is a confounding factor and raises the question of what else might confound the results, Dr. Ritchie added.
Dr. Ritchie emphasized that although ideally veterans would learn to recognize the onset of stress symptoms themselves, a dog could serve as a valuable companion in this process. “That’s precisely why this research should progress,” she said.
The authors and Dr. Ritchie reported no relevant disclosures.
A version of this article appeared on Medscape.com.
Most Disadvantaged Least Likely to Receive Thrombolysis
, early research shows.
“The findings should serve as an eye-opener that social determinants of health seem to be playing a role in who receives thrombolytic therapy,” said study investigator Chanaka Kahathuduwa, MD, PhD, resident physician, Department of Neurology, School of Medicine, Texas Tech University Health Sciences Center, Lubbock.
The findings were released ahead of the study’s scheduled presentation at the annual meeting of the American Academy of Neurology.
Contributor to Poor Outcomes
Social determinants of health are important contributors to poor stroke-related outcomes, the investigators noted. They pointed out that previous research has yielded conflicting results as to the cause.
Whereas some studies suggest poor social determinants of health drive increased stroke incidence, others raise the question of whether there are disparities in acute stroke care.
To investigate, the researchers used a publicly available database and diagnostic and procedure codes to identify patients presenting at emergency departments in Texas from 2016 to 2019 with ischemic stroke who did and did not receive thrombolytic therapy.
“We focused on Texas, which has a very large area but few places where people have easy access to health care, which is a problem,” said study co-investigator Chathurika Dhanasekara, MD, PhD, research assistant professor in the Department of Surgery, School of Medicine, Texas Tech University Health Sciences Center.
The study included 63,983 stroke patients, of whom 51.6% were female, 66.6% were White, and 17.7% were Black. Of these, 7198 (11.2%) received thrombolytic therapy; such therapies include the tissue plasminogen activators (tPAs) alteplase and tenecteplase.
Researchers collected information on social determinants of health such as age, race, gender, insurance type, and residence based on zip codes. They computed risk ratios (RRs) of administering thrombolysis on the basis of these variables.
Results showed that Black patients were less likely than their White counterparts to receive thrombolysis (RR, 0.90; 95% CI, 0.85-0.96). In addition, patients older than 65 years were less likely than those aged 18-45 years to receive thrombolysis (RR, 0.47; 95% CI, 0.44-0.51), and rural residents were less likely than urban dwellers to receive the intervention (RR, 0.60; 95% CI, 0.55-0.65).
It makes some sense, the researchers said, that rural stroke patients would be less likely to get thrombolysis because there’s a limited time window — within 4.5 hours — during which this therapy can be given, and such patients may live a long distance from a hospital.
Two other groups less likely to receive thrombolysis were Hispanic persons versus non-Hispanic persons (RR, 0.93; 95% CI, 0.87-0.98) and Medicare/Medicaid/Veterans Administration patients (RR, 0.77; 95% CI, 0.73-0.81) or uninsured patients (RR, 0.90; 95% CI, 0.87-0.94) versus those with private insurance.
Interestingly, male patients were less likely than female patients to receive thrombolysis (RR, 0.95; 95% CI, 0.90-0.99).
Surprising Findings
With the exception of the discrepancy in thrombolysis rates between rural versus urban dwellers, the study’s findings were surprising, said Dr. Kahathuduwa.
Researchers divided participants into quartiles, from least to most disadvantaged, based on the Social Vulnerability Index (SVI), created by the Centers for Disease Control and Prevention to determine social vulnerability or factors that can negatively affect a community’s health.
Among the 7930 individuals in the least disadvantaged group, 1037 received thrombolysis. In comparison, among the 7966 persons in the most disadvantaged group, 964 received thrombolysis.
After adjusting for age, sex, and education, investigators found that patients in the first quartile based on SVI were more likely to receive thrombolysis than those in the second and third quartiles (RR, 1.13; 95% CI, 1.04-1.22).
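As a rough sanity check, the crude (unadjusted) risk ratio can be recomputed directly from the quartile counts reported above; note that the published RR of 1.13 is adjusted for age, sex, and education, so the crude figure is expected to differ somewhat. This sketch uses only the numbers stated in the article:

```python
# Crude risk-ratio check from the reported counts (not the study's
# adjusted analysis). Counts: (received thrombolysis, group total).
least_disadvantaged = (1037, 7930)  # lowest-SVI quartile
most_disadvantaged = (964, 7966)    # highest-SVI quartile

def risk(treated, total):
    """Proportion of the group that received thrombolysis."""
    return treated / total

crude_rr = risk(*least_disadvantaged) / risk(*most_disadvantaged)
print(f"Crude RR (least vs most disadvantaged): {crude_rr:.2f}")  # ≈ 1.08
```

The crude ratio of about 1.08 points in the same direction as the adjusted RR of 1.13, consistent with the authors' conclusion that the least disadvantaged patients were more likely to be treated.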
The researchers also examined the impact of comorbidities using the Charlson Comorbidity Index. Patients with diabetes, hypertension, and high cholesterol in addition to signs of stroke would rouse a higher degree of suspicion and be more likely to be treated with tPA or tenecteplase, said Dr. Kahathuduwa.
“But even when we controlled for those comorbidities, the relationships we identified between health disparities and the likelihood of receiving thrombolysis remained the same,” said Dr. Kahathuduwa.
It’s not clear from this study what factors contribute to the disparities in stroke treatment. “All we know is these relationships exist,” said Dr. Kahathuduwa. “We should use this as a foundation to understand what’s really going on at the grassroots level.”
However, he added, it’s possible that accessibility plays a role. He noted that Lubbock has the only Level 1 stroke center in west Texas; most stroke centers in the state are concentrated in cities in east and central Texas.
The investigators are embarking on further research to assess the impact of determinants of health on receipt of endovascular therapy and the role of stroke severity.
“In an ideal world, all patients who need thrombolytic therapy would get thrombolytic therapy within the recommended time window because the benefits are very clear,” said Dr. Kahathuduwa.
The findings may not be generalizable because they come from a single database. “Our findings need to be validated in another independent dataset before we can confidently determine what’s going on,” said Dr. Kahathuduwa.
A limitation of the study was that it is unknown how many of the participants were seen at the hospital within the recommended time frame and would thus be eligible to receive the treatment.
Commenting on the research, Martinson Arnan, MD, a vascular neurologist at Bronson Neuroscience Center, Kalamazoo, Michigan, said the study’s “exploratory finding” is important and “illuminates the potential impact of social determinants of health on disparities in acute stroke treatment.”
Neurologists consistently emphasize the principle that “time is brain” — that timely restoration of blood flow is crucial for minimizing morbidity associated with ischemic stroke. This study offers a potential opportunity to investigate how social determinants of health may affect stroke care, said Dr. Arnan.
However, he added, further research is needed “to understand whether the differences in outcomes observed here are influenced by levels of health education, concordance between patients and their treating providers, or other issues related to access barriers.”
The investigators and Dr. Arnan report no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
FROM AAN 2024