Revised Criteria for Alzheimer’s Diagnosis, Staging Released
A work group convened by the Alzheimer’s Association has released revised criteria for the diagnosis and staging of Alzheimer’s disease, including a new biomarker classification system that incorporates fluid and imaging biomarkers as well as an updated disease staging system.
“Plasma markers are here now, and it’s very important to incorporate them into the criteria for diagnosis,” said senior author Maria C. Carrillo, PhD, Alzheimer’s Association chief science officer and medical affairs lead.
The revised criteria are the first updates since 2018.
“Defining diseases biologically, rather than based on syndromic presentation, has long been standard in many areas of medicine — including cancer, heart disease, and diabetes — and is becoming a unifying concept common to all neurodegenerative diseases,” lead author Clifford Jack Jr, MD, with Mayo Clinic, Rochester, Minnesota, said in a news release from the Alzheimer’s Association.
“These updates to the diagnostic criteria are needed now because we know more about the underlying biology of Alzheimer’s and we are able to measure those changes,” Dr. Jack added.
The 2024 revised criteria for diagnosis and staging of Alzheimer’s disease were published online in Alzheimer’s & Dementia.
Core Biomarkers Defined
The revised criteria define Alzheimer’s disease as a biologic process that begins with the appearance of Alzheimer’s disease neuropathologic change (ADNPC) in the absence of symptoms. Progression of the neuropathologic burden leads to the later appearance and progression of clinical symptoms.
The work group organized Alzheimer’s disease biomarkers into three broad categories: (1) core biomarkers of ADNPC, (2) nonspecific biomarkers that are important in Alzheimer’s disease but are also involved in other brain diseases, and (3) biomarkers of diseases or conditions that commonly coexist with Alzheimer’s disease.
Core Alzheimer’s biomarkers are subdivided into Core 1 and Core 2.
Core 1 biomarkers become abnormal early in the disease course and directly measure either amyloid plaques or phosphorylated tau (p-tau). They include amyloid PET; cerebrospinal fluid (CSF) amyloid beta 42/40 ratio, CSF p-tau181/amyloid beta 42 ratio, and CSF total (t)-tau/amyloid beta 42 ratio; and “accurate” plasma biomarkers, such as p-tau217.
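To make the fluid-ratio portion of Core 1 concrete, the short Python sketch below forms the three CSF ratios named above and flags them against cutoffs. The cutoff values, the direction of abnormality for each ratio, and the function names are illustrative assumptions rather than values from the criteria; real thresholds are assay- and laboratory-specific.

```python
# Minimal sketch, not a clinical tool: form the Core 1 CSF ratios named in the
# revised criteria and flag them against cutoffs. All cutoff values below are
# hypothetical placeholders; real thresholds are assay- and lab-specific.

def core1_ratios(abeta42: float, abeta40: float, ptau181: float, ttau: float) -> dict:
    """Return the three CSF ratios listed among the Core 1 biomarkers."""
    return {
        "abeta42/40": abeta42 / abeta40,
        "ptau181/abeta42": ptau181 / abeta42,
        "ttau/abeta42": ttau / abeta42,
    }

# Hypothetical cutoffs and directions of abnormality (illustrative only).
CUTOFFS = {
    "abeta42/40": ("below", 0.059),       # lower ratio suggests amyloid pathology
    "ptau181/abeta42": ("above", 0.023),  # higher ratio suggests pathology
    "ttau/abeta42": ("above", 0.28),
}

def flag_abnormal(ratios: dict, cutoffs: dict = CUTOFFS) -> dict:
    """Return True for each ratio that crosses its (hypothetical) cutoff."""
    flags = {}
    for name, value in ratios.items():
        direction, threshold = cutoffs[name]
        flags[name] = value < threshold if direction == "below" else value > threshold
    return flags

if __name__ == "__main__":
    ratios = core1_ratios(abeta42=550.0, abeta40=11000.0, ptau181=25.0, ttau=300.0)
    print(ratios)
    print(flag_abnormal(ratios))
```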
“An abnormal Core 1 biomarker result is sufficient to establish a diagnosis of Alzheimer’s disease and to inform clinical decision making [sic] throughout the disease continuum,” the work group wrote.
Core 2 biomarkers become abnormal later in the disease process and are more closely linked with the onset of symptoms. They include tau PET and certain soluble tau fragments associated with tau proteinopathy (eg, MTBR-tau243), as well as pT205 and nonphosphorylated mid-region tau fragments.
Core 2 biomarkers, when combined with Core 1, may be used to stage biologic disease severity; abnormal Core 2 biomarkers “increase confidence that Alzheimer’s disease is contributing to symptoms,” the work group noted.
The revised criteria give clinicians “the flexibility to use plasma or PET scans or CSF,” Dr. Carrillo said. “They will have several tools that they can choose from and offer this variety of tools to their patients. We need different tools for different individuals. There will be differences in coverage and access to these diagnostics.”
The revised criteria also include an integrated biologic and clinical staging scheme that acknowledges that common co-pathologies, cognitive reserve, and resistance may modify relationships between clinical and biologic Alzheimer’s disease stages.
Formal Guidelines to Come
The work group noted that currently, the clinical use of Alzheimer’s disease biomarkers is intended for the evaluation of symptomatic patients, not cognitively unimpaired individuals.
Disease-targeted therapies have not yet been approved for cognitively unimpaired individuals. For this reason, the work group currently recommends against diagnostic testing in cognitively unimpaired individuals outside the context of observational or therapeutic research studies.
This recommendation would change in the future if disease-targeted therapies that are currently being evaluated in trials demonstrate a benefit in preventing cognitive decline and are approved for use in preclinical Alzheimer’s disease, they wrote.
They emphasize that the revised criteria are not intended to provide step-by-step clinical practice guidelines for clinicians. Rather, they provide general principles to inform diagnosis and staging of Alzheimer’s disease that reflect current science.
“This is just the beginning,” said Dr. Carrillo. “This is a gathering of the evidence to date and putting it in one place so we can have a consensus and actually a way to test it and make it better as we add new science.”
This also serves as a “springboard” for the Alzheimer’s Association to create formal clinical guidelines. “That will come, hopefully, over the next 12 months. We’ll be working on it, and we hope to have that in 2025,” Dr. Carrillo said.
The revised criteria also emphasize the role of the clinician.
“The biologically based diagnosis of Alzheimer’s disease is meant to assist, rather than supplant, the clinical evaluation of individuals with cognitive impairment,” the work group wrote in a related commentary published online in Nature Medicine.
Recent diagnostic and therapeutic developments “herald a virtuous cycle in which improvements in diagnostic methods enable more sophisticated treatment approaches, which in turn steer advances in diagnostic methods,” they continued. “An unchanging principle, however, is that effective treatment will always rely on the ability to diagnose and stage the biology driving the disease process.”
Funding for this research was provided by the National Institutes of Health, Alexander family professorship, GHR Foundation, Alzheimer’s Association, Veterans Administration, Life Molecular Imaging, Michael J. Fox Foundation for Parkinson’s Research, Avid Radiopharmaceuticals, Eli Lilly, Gates Foundation, Biogen, C2N Diagnostics, Eisai, Fujirebio, GE Healthcare, Roche, National Institute on Aging, Roche/Genentech, BrightFocus Foundation, Hoffmann-La Roche, Novo Nordisk, Toyama, National MS Society, Alzheimer Drug Discovery Foundation, and others. A complete list of donors and disclosures is included in the original article.
A version of this article appeared on Medscape.com.
FROM ALZHEIMER’S & DEMENTIA
Common Cognitive Test Falls Short for Concussion Diagnosis
The cognitive test commonly used to evaluate athletes for concussion often fails to identify those who are concussed, a new study showed.
Investigators found that almost half of athletes diagnosed with a concussion tested normally on the Sports Concussion Assessment Tool 5 (SCAT5), the recommended tool for measuring cognitive skills in concussion evaluations. The most accurate measure of concussion was symptoms reported by the athletes.
“If you don’t do well on the cognitive exam, it suggests you have a concussion. But many people who are concussed do fine on the exam,” lead author Kimberly Harmon, MD, professor of family medicine and section head of sports medicine at the University of Washington School of Medicine, Seattle, said in a news release.
The study was published online in JAMA Network Open.
Introduced in 2004, the SCAT was created to standardize the collection of information clinicians use to diagnose concussion, including evaluation of symptoms, orientation, and balance. It also uses a 10-word list to assess immediate memory and delayed recall.
Dr. Harmon’s own experiences as a team physician led her to wonder about the accuracy of the cognitive screening portion of the SCAT. She saw that “some people were concussed, and they did well on the recall test. Some people weren’t concussed, and they didn’t do well. So I thought we should study it,” she said.
Investigators compared 92 National Collegiate Athletic Association (NCAA) Division 1 athletes who had sustained a concussion between 2020 and 2022 and had a concussion evaluation within 48 hours to 92 matched nonconcussed teammates (overall cohort, 52% men). Most concussions occurred in those who played football, followed by volleyball.
All athletes had previously completed NCAA-required baseline concussion screenings. Participants completed the SCAT5 screening test within 2 weeks of the incident concussion.
No significant differences were found between the baseline scores of athletes with and without concussion. Moreover, responses on the word recall section of the SCAT5 held little predictive value for concussion.
Nearly half (45%) of athletes with concussion performed at or even above their baseline cognitive scores, which the authors said highlights the limitations of the cognitive components of the SCAT5.
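To see what that 45% figure implies, the brief Python sketch below works through the arithmetic under the assumption that "scoring below one's own baseline" is treated as the test-positive rule; both the rule and the exact head count are illustrative rather than the study's own analysis.

```python
# Back-of-the-envelope sketch (not the study's analysis): if 45% of the 92
# concussed athletes scored at or above their own baseline, a rule that calls
# "below baseline" positive would detect only about 55% of true concussions.

concussed = 92                                   # concussed athletes in the cohort
at_or_above_baseline = round(0.45 * concussed)   # ~41 athletes, per the 45% figure
below_baseline = concussed - at_or_above_baseline

sensitivity = below_baseline / concussed
print(f"Sensitivity of a 'below baseline' rule: {sensitivity:.0%}")  # roughly 55%
```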
The most accurate predictor of concussion was participants’ responses to questions about their symptoms.
“If you get hit in the head and go to the sideline and say, ‘I have a headache, I’m dizzy, I don’t feel right,’ I can say with pretty good assurance that you have a concussion,” Dr. Harmon continued. “I don’t need to do any testing.”
Unfortunately, the problem is “that some athletes don’t want to come out. They don’t report their symptoms or may not recognize their symptoms. So then you need an objective, accurate test to tell you whether you can safely put the athlete back on the field. We don’t have that right now.”
The study did not control for concussion history, and the all–Division 1 cohort means the findings may not be generalizable to other athletes.
Nevertheless, investigators said the study “affirms that reported symptoms are the most sensitive indicator of concussion, and there are limitations to the objective cognitive testing included in the SCAT.” They concluded that concussion “remains a clinical diagnosis that should be based on a thorough review of signs, symptoms, and clinical findings.”
This study was funded in part by donations from University of Washington alumni Jack and Luellen Cherneski and the Chisholm Foundation. Dr. Harmon reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
Form of B12 Deficiency Affecting the Central Nervous System May Be New Autoimmune Disease
An autoantibody that impairs vitamin B12 transport into the central nervous system, first discovered while studying the puzzling case of one patient with unexplained neurological symptoms, was also detected in a small percentage of healthy individuals and was nearly four times as prevalent in patients with neuropsychiatric systemic lupus erythematosus (SLE).
“I didn’t think this single investigation was going to yield a broader phenomenon with other patients,” lead author John V. Pluvinage, MD, PhD, a neurology resident at the University of California San Francisco, said in an interview. “It started as an N-of-one study just based on scientific curiosity.”
“It’s a beautifully done study,” added Betty Diamond, MD, director of the Institute of Molecular Medicine at the Feinstein Institutes for Medical Research in Manhasset, New York, commenting on the research. It uncovers “yet another example of a disease where antibodies getting into the brain are the problem.”
The research was published in Science Translational Medicine.
The Patient
The investigation began in 2014 with a 67-year-old woman presenting with difficulty speaking, ataxia, and tremor. Her blood tests showed no signs of B12 deficiency, and testing for known autoantibodies came back negative.
Solving this mystery required a more exhaustive approach. The patient enrolled in a research study focused on identifying novel autoantibodies in suspected neuroinflammatory disease, using a screening technology called phage immunoprecipitation sequencing.
“We adapted this technology to screen for autoantibodies in an unbiased manner by displaying every peptide across the human proteome and then mixing those peptides with patient antibodies in order to figure out what the antibodies are binding to,” explained Dr. Pluvinage.
Using this method, he and colleagues discovered that this woman had autoantibodies that target CD320 — a receptor important in the cellular uptake of B12. While her blood tests were normal, B12 in the patient’s cerebrospinal fluid (CSF) was “nearly undetectable,” Dr. Pluvinage said. Using an in vitro model of the blood-brain barrier (BBB), the researchers determined that anti-CD320 impaired the transport of B12 across the BBB by targeting receptors on the cell surface.
Treating the patient with a combination of immunosuppressant medication and high-dose B12 supplementation increased B12 levels in the patient’s CSF and improved clinical symptoms.
Identifying More Cases
Dr. Pluvinage and colleagues tested the 254 other individuals enrolled in the neuroinflammatory disease study and identified seven participants with CSF anti-CD320 autoantibodies — four of whom had low B12 in the CSF.
In a group of healthy controls, anti-CD320 seropositivity was 6%, similar to the positivity rate in 132 paired serum and CSF samples from a cohort of patients with multiple sclerosis (5.7%). In this group of patients with multiple sclerosis, anti-CD320 presence in the blood was highly predictive of high levels of CSF methylmalonic acid, a metabolic marker of B12 deficiency.
Researchers also screened for anti-CD320 seropositivity in 408 patients with non-neurologic SLE and 28 patients with neuropsychiatric SLE and found that the autoantibody was nearly four times as prevalent in patients with neurologic symptoms (21.4%) than in those with non-neurologic SLE (5.6%).
“The clinical relevance of anti-CD320 in healthy controls remains uncertain,” the authors wrote. However, it is not uncommon for healthy individuals to carry known autoantibodies.
“There are always people who have autoantibodies who don’t get disease, and why that is we don’t know,” said Dr. Diamond. Some individuals may develop clinical symptoms later, or there may be other reasons why they are protected against disease.
Dr. Pluvinage is eager to follow some seropositive healthy individuals to track their neurologic health over time, to see if the presence of anti-CD320 “alters their neurologic trajectories.”
Alternative Pathways
Lastly, Dr. Pluvinage and colleagues set out to explain why patients with anti-CD320 in their blood did not show any signs of B12 deficiency. They hypothesized that another receptor may be compensating and still allowing blood cells to take up B12. Using CRISPR screening, the team identified the low-density lipoprotein receptor as an alternative pathway to B12 uptake.
“These findings suggest a model in which anti-CD320 impairs transport of B12 across the BBB, leading to autoimmune B12 central deficiency (ABCD) with varied neurologic manifestations but sparing peripheral manifestations of B12 deficiency,” the authors wrote.
The work was supported by the National Institute of Mental Health, National Center for Chronic Disease Prevention and Health Promotion, Department of Defense, UCSF Helen Diller Family Comprehensive Cancer Center Laboratory for Cell Analysis Shared Resource Facility, National Multiple Sclerosis Society, Valhalla Foundation, and the Westridge Foundation. Dr. Pluvinage is a co-inventor on a patent application related to this work. Dr. Diamond had no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM SCIENCE TRANSLATIONAL MEDICINE
Study Links Suicide to Missed Early Care After Discharge
TOPLINE:
A study found that patients who die by suicide within a year after discharge from inpatient mental health care are less likely to have had a primary care consultation in the first 2 weeks after discharge, highlighting a gap in care during this high-risk transition period.
METHODOLOGY:
- Researchers used a nested case-control study design, analyzing the records of 613 people who died by suicide within a year of being discharged from an inpatient psychiatric facility in England between 2001 and 2019.
- Of these, 93 (15.4%) died within 2 weeks of discharge.
- Each patient was matched with up to 20 control individuals who were discharged at a similar time but were still alive.
- Researchers evaluated primary care consultations after discharge.
TAKEAWAY:
- People who died by suicide within a year were less likely to have had a primary care consultation within 2 weeks of discharge (adjusted odds ratio [aOR], 0.61; P = .01); a brief worked example of the odds-ratio calculation follows this list.
- Those who died by suicide had higher odds for a consultation in the week preceding their death (aOR, 1.71; P < .001) and the prescription of three or more psychotropic medications (aOR, 1.73; P < .001).
- Evidence of discharge communication between the facility and primary care clinician was infrequent, highlighting a gap in continuity of care.
- Approximately 40% of people who died within 2 weeks of discharge had a documented visit with a primary care clinician during that period.
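As a point of reference for the aOR figures above, the Python sketch below shows how an odds ratio is formed from a simple 2x2 split. The counts are invented for illustration; the study's values come from adjusted regression on matched case-control data, not from this raw calculation.

```python
# Illustrative only: how an odds ratio near 0.61 can arise. Counts are invented;
# the study's aORs come from adjusted analyses of matched case-control data.

cases_with_consult, cases_without_consult = 30, 70        # hypothetical case split
controls_with_consult, controls_without_consult = 41, 59  # hypothetical control split

odds_cases = cases_with_consult / cases_without_consult            # ~0.43
odds_controls = controls_with_consult / controls_without_consult   # ~0.69
print(f"Odds ratio: {odds_cases / odds_controls:.2f}")  # ~0.62, i.e., lower odds of a consult among cases
```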
IN PRACTICE:
“Primary care clinicians have opportunities to intervene and should prioritize patients experiencing transition from inpatient care,” the authors wrote.
SOURCE:
The study was led by Rebecca Musgrove, PhD, of the Centre for Mental Health and Safety at The University of Manchester in England, and published online on June 12 in BJGP Open.
LIMITATIONS:
The study’s reliance on individuals registered with the Clinical Practice Research Datalink may have caused some suicide cases to be excluded, limiting generalizability. Lack of linked up-to-date mental health records may have led to the omission of significant post-discharge care data. Incomplete discharge documentation may undercount informational continuity, affecting multivariable analysis.
DISCLOSURES:
The study was supported by the National Institute for Health and Care Research. Some authors declared serving as members of advisory groups and receiving grants and personal fees from various sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Two-Drug Combo Promising for Methamphetamine Use Disorder
Extended-release injectable naltrexone combined with extended-release oral bupropion (NTX + BUPN) for moderate or severe methamphetamine use disorder was associated with a significant decrease in use of the drug, a new study showed.
Investigators leading the randomized clinical trial found a 27% increase in negative methamphetamine urine tests in the treatment group — indicating reduced use — compared with an 11% increase in negative urine tests in control participants.
“These findings have important implications for pharmacological treatment for methamphetamine use disorder. There is no FDA-approved medication for it, yet methamphetamine-involved overdoses have greatly increased over the past decade,” lead author Michael Li, MD, assistant professor-in-residence of family medicine at the David Geffen School of Medicine at UCLA, Los Angeles, said in a news release.
The study was published online in Addiction.
Methamphetamine use has increased worldwide, from 33 million users in 2010 to 34 million in 2020, with overdose deaths rising fivefold in the United States over the past decade, the authors wrote.
A previous open-label study of NTX + BUPN showed efficacy for treating severe methamphetamine use disorder, and NTX and BUPN have each shown efficacy separately for this indication.
This new study is the second phase of the multicenter ADAPT-2 trial, conducted between 2017 and 2019 in 403 participants with methamphetamine use disorder. In the first stage, 109 people received NTX + BUPN and 294 received placebo.
The treatment group received extended-release NTX (380 mg) or placebo as an intramuscular injection on weeks 1, 4, 7, and 10. Extended-release BUPN or placebo tablets were administered weekly, with BUPN doses starting at 150 mg on day 1 and increasing to 450 mg by day 3. At week 13, participants received a tapering dose for 4 days before discontinuing.
As previously reported by this news organization, the two-drug combo was effective at reducing methamphetamine use at 6 weeks. The current analysis measured change in methamphetamine use during weeks 7-12 of the trial and in posttreatment weeks 13-16.
Participants in the intervention group during stage 1 showed an additional 9.2% increase (P = .038) during stage 2 in their probability of testing negative for methamphetamine. This represented a total increase of 27.1% in negative urine tests across the complete 12 weeks of treatment, compared with a total 11.4% increase in negative tests in the placebo group.
The 12-week increase in methamphetamine-negative urine tests in the intervention group was 15.8% greater (P = .006) than the increase in the placebo group.
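A quick way to see where the 15.8% contrast sits relative to the two group-level figures is the arithmetic below. This is only a sketch of the subtraction; the paper's 15.8% estimate comes from its fitted model rather than this raw difference.

```python
# Sketch of the reported arithmetic, not a reanalysis: within-group increases in
# methamphetamine-negative urine tests over the 12 treatment weeks, and their
# raw difference. The paper's 15.8% contrast is model-estimated, so it need not
# match the raw subtraction exactly.

ntx_bupn_increase = 27.1   # % increase in negative tests, NTX + BUPN group
placebo_increase = 11.4    # % increase in negative tests, placebo group

raw_difference = ntx_bupn_increase - placebo_increase
print(f"Raw difference: {raw_difference:.1f} percentage points")  # 15.7
```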
There was no significant change in either group at posttreatment follow-up in weeks 13-16.
“Our findings suggest that ongoing NTX + BUPN treatment yields statistically significant reductions in methamphetamine use that continue from weeks 7 to 12,” the authors wrote. The lack of change in methamphetamine use from weeks 13-16 corresponds to the conclusion of treatment in week 12, they added.
It remains to be determined “whether continued use of NTX + BUPN treatment past 12 weeks would yield further reductions in use,” the authors wrote, noting that prior stimulant use disorder trials suggest that change in use is gradual and that sustained abstinence is unlikely in merely 12 weeks of a trial. Rather, it is dependent on treatment duration.
“This warrants future clinical trials to quantify changes in methamphetamine use beyond 12 weeks and to identify the optimal duration of treatment with this medication,” they concluded.
The study was funded by awards from the National Institute on Drug Abuse (NIDA), the US Department of Health and Human Services, the National Institute of Mental Health, and the O’Donnell Clinical Neuroscience Scholar Award from the University of Texas Southwestern Medical Center. Alkermes provided Vivitrol (naltrexone for extended-release injectable suspension) and matched placebo free of charge for use in this trial under a written agreement with NIDA. Dr. Li reports no relevant financial relationships. The other authors’ disclosures are listed on the original paper.
A version of this article first appeared on Medscape.com.
More Evidence PTSD Tied to Obstructive Sleep Apnea Risk
Posttraumatic stress disorder (PTSD) may increase the risk for obstructive sleep apnea (OSA) in older male veterans, the results of a cross-sectional twin study suggested. However, additional high-quality research is needed and may yield important mechanistic insights into both conditions and improve treatment, experts said.
“The strength of the association was a bit surprising,” said study investigator Amit J. Shah, MD, MSCR, Emory University, Atlanta, Georgia. “Many physicians and scientists may otherwise assume that the relationship between PTSD and sleep apnea would be primarily mediated by obesity, but we did not find that obesity explained our findings.”
The study was published online in JAMA Network Open.
A More Rigorous Evaluation
“Prior studies have shown an association between PTSD and sleep apnea, but the size of the association was not as strong,” Dr. Shah said, possibly because many were based on symptomatic patients referred for clinical evaluation of OSA and some relied on self-report of a sleep apnea diagnosis.
The current study involved 181 male twins, aged 61-71 years, including 66 pairs discordant for PTSD symptoms and 15 pairs discordant for PTSD diagnosis, who were recruited from the Vietnam Era Twin Registry and underwent a formal psychiatric and polysomnography evaluation as follow-up of the Emory Twin Study.
PTSD symptom severity was assessed using the self-administered Posttraumatic Stress Disorder Checklist (PCL). OSA was at least mild in 74% of participants, moderate to severe in 40%, and severe in 18%.
The mean apnea-hypopnea index (AHI) was 17.7 events per hour, and the mean proportion of the night with SaO2 less than 90% was 8.9%.
In fully adjusted models, each 15-point within-pair difference in PCL score was associated with a 4.6 events-per-hour higher AHI, a 6.4 events-per-hour higher oxygen desaturation index, and a 4.8% greater sleep duration with SaO2 less than 90%.
A current PTSD diagnosis was associated with an approximately 10-unit higher adjusted AHI in separate models involving potential cardiovascular mediators (10.5 units; 95% CI, 5.7-15.3) and sociodemographic and psychiatric confounders (10.7 units; 95% CI, 4.0-17.4).
The investigators called for more research into the underlying mechanisms but speculated that pharyngeal collapsibility and exaggerated loop gain, among others, may play a role.
“Our findings broaden the concept of OSA as one that may involve stress pathways in addition to the traditional mechanisms involving airway collapse and obesity,” Dr. Shah said. “We should be more suspicious of OSA as an important comorbidity in PTSD, given the high OSA prevalence that we found in PTSD veterans.”
Questions Remain
In an accompanying editorial, Steven H. Woodward, PhD, and Ruth M. Benca, MD, PhD, VA Palo Alto Health Care System, Palo Alto, California, noted the study affirmatively answers the decades-old question of whether rates of OSA are elevated in PTSD and “eliminates many potential confounders that might cast doubt on the PTSD-OSA association.”
However, they noted, it is difficult to ascertain the directionality of this association. In terms of potential mechanisms, they pointed out that the oft-cited 1994 study linking sleep fragmentation with upper airway collapsibility has never been replicated and that a recent study found no difference in airway collapsibility or evidence of differential loop gain in combat veterans with and without PTSD.
Dr. Woodward and Dr. Benca also highlighted the large body of evidence that psychiatric disorders such as bipolar disorder, schizophrenia, and, in particular, major depressive disorder, are strongly associated with higher rates of OSA.
“In sum, we do not believe that a fair reading of the current literature supports a conclusion that PTSD bears an association with OSA that does not overlap with those manifested by other psychiatric disorders,” they wrote.
“This commentary is not intended to discourage any specific line of inquiry. Rather, we seek to keep the door open as wide as possible to hypotheses and research designs aimed at elucidating the relationships between OSA and psychiatric disorders,” Dr. Woodward and Dr. Benca concluded.
In response, Dr. Shah said the editorialists’ “point about psychiatric conditions other than PTSD also being important in OSA is well taken. In our own cohort, we did not see such an association, but that does not mean that this does not exist.
“Autonomic physiology, which we plan to study next, may underlie not only the PTSD-OSA relationship but also the relationship between other psychiatric factors and OSA,” he added.
The study was funded by grants from the National Institutes of Health (NIH). One study author reported receiving personal fees from Idorsia, and another reported receiving personal fees from Clinilabs, Eisai, Ferring Pharmaceuticals, Huxley, Idorsia, and Merck Sharp & Dohme. Dr. Benca reported receiving grants from the NIH and Eisai and personal fees from Eisai, Idorsia, Haleon, and Sage Therapeutics. Dr. Woodward reported having no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Chronic Loneliness Tied to Increased Stroke Risk
Adults older than 50 years who report experiencing persistently high levels of loneliness have a 56% increased risk for stroke, a new study showed.
The increased stroke risk did not apply to individuals who reported experiencing situational loneliness, a finding that investigators believe bolsters the hypothesis that chronic loneliness is driving the association.
“Our findings suggest that individuals who experience chronic loneliness are at higher risk for incident stroke,” lead investigator Yenee Soh, ScD, research associate of social and behavioral sciences in the Harvard T.H. Chan School of Public Health, Boston, told this news organization. “It is important to routinely assess loneliness, as the consequences may be worse if unidentified and/or ignored.”
The findings were published online in eClinicalMedicine.
Significant, Chronic Health Consequences
Exacerbated by the COVID-19 pandemic, loneliness is at an all-time high. A 2023 Surgeon General’s report highlighted the fact that loneliness and social isolation are linked to significant and chronic health consequences.
Previous research has linked loneliness to cardiovascular disease, yet few studies have examined the association between loneliness and stroke risk. The current study is one of the first to examine the association between changes in loneliness and stroke risk over time.
Using data from the 2006-2018 Health and Retirement Study, researchers assessed the link between loneliness and incident stroke over time. Between 2006 and 2008, 12,161 study participants, who were all older than 50 years with no history of stroke, responded to questions from the Revised UCLA Loneliness Scale. From these responses, researchers created summary loneliness scores.
Four years later, from 2010 to 2012, the 8936 remaining study participants responded to the same 20 questions again. Based on loneliness scores across the two time points, participants were divided into four groups:
- Consistently low (those who scored low on the loneliness scale at both baseline and follow-up).
- Remitting (those who scored high at baseline and low at follow-up).
- Recent onset (those who scored low at baseline and high at follow-up).
- Consistently high (those who scored high at both baseline and follow-up).
Incident stroke was determined by participant report and medical record data.
Among participants whose loneliness was measured at baseline only, 1237 strokes occurred during the 2006-2018 follow-up period. Among those who provided two loneliness assessments over time, 601 strokes occurred during the follow-up period.
Even after adjusting for social isolation, depressive symptoms, physical activity, body mass index, and other health conditions, investigators found that participants who reported being lonely at baseline only had a 25% increased stroke risk, compared with those who did not report being lonely at baseline (hazard ratio [HR], 1.25; 95% confidence interval [CI], 1.06-1.47).
Participants who reported having consistently high loneliness across both time points had a 56% increased risk for incident stroke vs those who did not report loneliness at both time points after adjusting for social isolation and depression (HR, 1.56; 95% CI, 1.11-2.18).
The researchers did not investigate the underlying mechanisms that may contribute to the association between loneliness and stroke risk but speculated that several factors may be at play. These could include physiological mechanisms such as inflammation driven by increased hypothalamic-pituitary-adrenocortical activity, behavioral factors such as poor medication adherence, smoking, and/or alcohol use, and psychosocial issues.
Those who experience chronic loneliness may represent individuals who are unable to develop or maintain satisfying social relationships, which may result in longer-term interpersonal difficulties, Dr. Soh noted.
“Since loneliness is a highly subjective experience, seeking help to address and intervene to address a patient’s specific personal needs is important. It’s important to distinguish loneliness from social isolation,” said Dr. Soh.
She added that “by screening for loneliness and providing care or referring patients to relevant behavioral healthcare providers, clinicians can play a crucial role in addressing loneliness and its associated health risks early on to help reduce the population burden of loneliness.”
Progressive Research
Commenting on the findings for this news organization, Elaine Jones, MD, medical director of Access TeleCare, who was not involved in the research, applauded the investigators for “advancing the topic by looking at the chronicity aspect of loneliness.”
She said more research is needed to investigate loneliness as a stroke risk factor and noted that there may be something inherently different among respondents who reported loneliness at both study time points.
“Personality types may play a role here. We know people with positive attitudes and outlooks can do better in challenging health situations than people who are negative in their attitudes, regardless of depression. Perhaps those who feel lonely initially decided to do something about it and join groups, take up a hobby, or re-engage with family or friends. Perhaps the people who are chronically lonely don’t, or can’t, do this,” Dr. Jones said.
Chronic loneliness can cause stress, she added, “and we know that stress chemicals and hormones can be harmful to health over long durations of time.”
The study was funded by the National Institute on Aging. There were no conflicts of interest noted.
A version of this article first appeared on Medscape.com.
How to Make Life Decisions
Halifax, Nova Scotia; American Samoa; Queens, New York; Lansing, Michigan; Gurugram, India. I often ask patients where they’re from. Practicing in San Diego, the answers are a geography lesson. People from around the world come here. I sometimes add the more interesting question: How’d you end up here? Many took the three highways to San Diego: the Navy, the defense industry (like General Dynamics), or followed a partner. My Queens patient had a better answer: Super Bowl XXII. On Sunday, Jan. 31st, 1988, the Redskins played the Broncos in San Diego. John Elway and the Broncos lost, but it didn’t matter. “I was scrapin’ the ice off my windshield that Monday morning when I thought, that’s it. I’m done! I drove to the garage where I worked and quit on the spot. Then I drove home and packed my bags.”
In a paper on how to make life decisions, this guy would be Exhibit A: “Don’t overthink it.” That approach might not be suitable for everyone, or for every decision. It might actually be an example of how not to make life decisions (more on that later).
The first treatise on this subject was a paper by one Franklin, Ben in 1772. Providing advice to a friend on how to make a career decision, Franklin argued: “My way is to divide half a sheet of paper by a line into two columns; writing over the one Pro and over the other Con.” This “moral algebra,” as he called it, was a framework to bring rigor to a messy, organic problem.
The flaw in this method is that in the end you have two lists. Then what? Does the length of the lists decide? What if some factors are more important? Well, let’s add tools to help. You could use a spreadsheet and assign weights to each variable. Then sum the values and choose based on that. So if “not scraping ice off your windshield” is twice as important as “doubling your rent,” then you’ve got your answer. But what if you aren’t good at estimating how important things are? Actually, most of us are pretty awful at assigning weights to life variables – having bags of money is the consummate example. Seems important, but because of habituation, it turns out not to be sustainable. Note Exhibit B, our wealthy neighbor who owns a Lambo and G-Wagen (AMG squared, of course), who just parked a Cybertruck in his driveway. Realizing the risk of depending on people’s flawed judgment, companies instead use statistical modeling called bootstrap aggregating to “vote” on the weights for variables in a prediction. If you aren’t sure how important a new Rivian or walking to the beach would be, a model can answer that for you! It’s a bit disconcerting, I know. I mean, how can a model know what we’d like? Wait, isn’t that how Netflix picks stuff for you? Exactly.
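For readers who want the arithmetic spelled out, here is a minimal sketch in Python of that weighted pro/con tally; the factors and weights are invented purely for illustration and are not taken from any real decision.

```python
# Illustrative sketch of a weighted version of Franklin's two-column
# "moral algebra." All factors and weights below are hypothetical.

def weighted_score(factors):
    """Sum signed, weighted factors: pros count positive, cons negative."""
    return sum(weight * (1 if is_pro else -1) for weight, is_pro in factors.values())

# Hypothetical move-to-San-Diego decision.
factors = {
    "no ice to scrape off the windshield": (2.0, True),   # pro, weight 2
    "rent doubles":                        (1.0, False),  # con, weight 1
    "walk to the beach":                   (1.5, True),   # pro
    "far from family":                     (1.5, False),  # con
}

score = weighted_score(factors)
print(f"Net score: {score:+.1f} -> {'move' if score > 0 else 'stay'}")
```

Of course, the tally is only as good as the weights, which is exactly where most of us stumble.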
Ok, so why don’t we just ask our friendly personal AI? “OK, ChatGPT, given what you know about me, where can I have it all?” Alas, here we slam into a glass wall. It seems the answer is out there but even our life-changing magical AI tools fail us. Mathematically, it is impossible to have it all. An illustrative example of this is the economic “impossible trinity” problem: even the most sophisticated algorithm cannot find an optimal solution that combines a fixed foreign exchange rate, free capital movement, and an independent monetary policy. Economists have concluded you must trade off one to have the other two. Impossible trinities are common in economics and in life. Armistead Maupin in his “Tales of the City” codifies it as Mona’s Law, the essence of which is: You cannot have the perfect job, the perfect partner, and the perfect house at the same time. (See Exhibit C, one Tom Brady.)
This brings me to my final point: hard decisions are matters of the heart, and experiencing life is the best way to understand its beautiful chaos. If making rash judgments is ill-advised and technology cannot solve every problem (try asking your AI buddy for the square root of 2 as a fraction), what tools can we use? Maybe try reading more novels. They allow us to experience multiple lifetimes in a short time, which is what we need to learn what matters. Reading Dorothea’s choice at the end of “Middlemarch” is a nice example. Should she give up Lowick Manor and marry the penniless Ladislaw or keep it and use her wealth to help others? Seeing her struggle helps us understand how to answer questions like: Should I give up my academic practice or marry that guy or move to Texas? These cannot be reduced to arithmetic. The only way to know is to know as much of life as possible.
My last visit with my Queens patient was our last together. He’s divorced and moving from San Diego to Gallatin, Tennessee. “I’ve paid my last taxes to California, Doc. I decided that’s it, I’m done!” Perhaps he should have read “The Grapes of Wrath” before he set out for California in the first place.
Dr. Benabio is director of Healthcare Transformation and chief of dermatology at Kaiser Permanente San Diego. The opinions expressed in this column are his own and do not represent those of Kaiser Permanente. Dr. Benabio is @Dermdoc on Twitter. Write to him at [email protected].
CMS Announces End to Cyberattack Relief Program
The Centers for Medicare & Medicaid Services (CMS) has announced the conclusion of a program that provided billions in early Medicare payments to those affected by the Change Healthcare/UnitedHealth Group cyberattack last winter.
CMS reported that the program advanced more than $2.55 billion in Medicare payments to > 4200 Part A providers, including hospitals, and more than $717.18 million in payments to Part B suppliers such as physicians, nonphysician practitioners, and durable medical equipment suppliers.
According to CMS, the Medicare billing system is now functioning properly, and 96% of the early payments have been recovered. The advances were to represent ≤ 30 days of typical claims payments in a 3-month period of 2023, with full repayment expected within 90 days through “automatic recoupment from Medicare claims” — no extensions allowed.
The agency took a victory lap regarding its response. “In the face of one of the most widespread cyberattacks on the US health care industry, CMS promptly took action to get providers and suppliers access to the funds they needed to continue providing patients with vital care,” CMS Administrator Chiquita Brooks-LaSure said in a statement. “Our efforts helped minimize the disruptive fallout from this incident, and we will remain vigilant to be ready to address future events.”
Ongoing Concerns from Health Care Organizations
Ben Teicher, an American Hospital Association spokesman, said that the organization hopes that CMS will be responsive if there’s more need for action after the advance payment program expires. The organization represents about 5000 hospitals, health care systems, and other providers.
“Our members report that the aftereffects of this event will likely be felt throughout the remainder of the year,” he said. According to Teicher, hospitals remain concerned about their ability to process claims and appeal denials, the safety of reconnecting to cyber services, and access to information needed to bill patients and reconcile payments.
In addition, hospitals are concerned about “financial support to mitigate the considerable costs incurred as a result of the cyberattack,” he said.
Charlene MacDonald, executive vice-president of public affairs at the Federation of American Hospitals, which represents more than 1000 for-profit hospitals, sent a statement to this news organization that said some providers “are still feeling the effects of care denials and delays caused by insurer inaction.
“We appreciate that the Administration acted within its authority to support providers during this unprecedented crisis and blunt these devastating impacts, especially because a vast majority of managed care companies failed to step up to the plate,” she said. “It is now time to shift our focus to holding plans accountable for using tactics to delay and deny needed patient care.”
Cyberattack Impact and Response
The ransomware attack against Change Healthcare/UnitedHealth Group targeted an electronic data interchange clearinghouse that processes payer reimbursements, disrupting cash flow at hospitals and medical practices and affecting patient access to prescriptions and life-saving therapy.
Change Healthcare — part of the UnitedHealth Group subsidiary Optum — processes half of all medical claims, according to a Department of Justice lawsuit. The American Hospital Association described the cyberattack as “the most significant and consequential incident of its kind” in US history.
By late March, UnitedHealth Group said nearly all medical and pharmacy claims were processing properly, while a deputy secretary of the US Department of Health & Human Services told clinicians that officials were focusing on the last group of clinicians who were facing cash-flow problems.
Still, a senior advisor with CMS told providers at that time that “we have heard from so many providers over the last several weeks who are really struggling to make ends meet right now or who are worried that they will not be able to make payroll in the weeks to come.”
Randy Dotinga is a freelance health/medical reporter and board member of the Association of Health Care Journalists.
A version of this article appeared on Medscape.com.
Is This Journal Legit? Predatory Publishers
This transcript has been edited for clarity.
Andrew N. Wilner, MD: My guest today is Dr. Jose Merino, editor in chief of the Neurology family of journals and professor of neurology and co-vice chair of education at Georgetown University in Washington, DC.
Our program today is a follow-up of Dr. Merino’s presentation at the recent American Academy of Neurology meeting in Denver, Colorado. Along with two other panelists, Dr. Merino discussed the role of open-access publication and the dangers of predatory journals.
Jose G. Merino, MD, MPhil: Thank you for having me here. It’s a pleasure.
Open Access Defined
Dr. Wilner: I remember when publication in neurology was pretty straightforward. It was either the green journal or the blue journal, but things have certainly changed. I think one topic that is not clear to everyone is this concept of open access. Could you define that for us?
Dr. Merino: Sure. Open access is a mode of publication that fosters more open or accessible science. The idea of open access is that it combines two main elements. One is that the papers that are published become immediately available to anybody with an internet connection anywhere in the world without any restrictions.
The second important element from open access, which makes it different from other models we can talk about, is the fact that the authors retain the copyright of their work, but they give the journal and readers a license to use, reproduce, and modify the content.
This is different, for example, from instances where we have funder mandates. For example, NIH papers have to become available 6 months after publication, so they’re available to everybody but not immediately.
Dr. Wilner: I remember that when a journal article was published, say, in Neurology, if you didn’t have a subscription to Neurology, you went to the library that hopefully had a subscription.
If they didn’t have it, you would write to the author and say, “Hey, I heard you have this great paper because the abstract was out there. Could you send me a reprint?” Has that whole universe evaporated?
Dr. Merino: It depends on how the paper is published. For example, in Neurology, some of the research we publish is open access. Basically, if you have an internet connection, you can access the paper.
That’s the case for papers published in our wholly open-access journals in the Neurology family like Neurology Neuroimmunology & Neuroinflammation, Neurology Genetics, or Neurology Education.
For other papers that are published in Neurology, not under open access, there is a paywall. For some of them, the paywall comes down after a few months based on funder mandates and so on. As I was mentioning, the NIH-funded papers are available 6 months later.
In the first 6 months, you may have to go to your library, and if your library has a subscription, you can download it directly. [This is also true for] those that always stay behind the paywall, where you have to have a subscription or your library has to have a subscription.
Is Pay to Publish a Red Flag?
Dr. Wilner: I’m a professional writer. With any luck, when I write something, I get paid to write it. There’s been a long tradition in academic medicine that when you submit an article to, say, Neurology, you don’t get paid as an author for the publication. Your reward is the honor of it being published.
Neurology supports itself in various ways, including advertising and so on. That’s been the contract: free publication for work that merits it, and the journal survives on its own.
With open access, one of the things that’s happened is that — and I’ve published open access myself — is that I get a notification that I need to pay to have my article that I’ve slaved over published. Explain that, please.
Dr. Merino: This is the issue with open access. As I mentioned, the paper gets published. You’re giving the journal a license to publish it. You’re retaining the copyright of your work. That means that the journal cannot make money or support itself by just publishing open access because they belong to you.
Typically, open-access journals are not in print and don’t have much in terms of advertising. The contract is you’re giving me a license to publish it, but it’s your journal, so you’re paying a fee for the journal expenses to basically produce your paper. That’s what’s happening with open access.
That’s been recognized by many funders. The NIH and many European funders, for example, now include open-access fees as part of their research funding. Now, of course, this doesn’t help if you’re not a funded researcher, or if you’re a fellow who’s doing work, and so on.
Typically, most journals will have waived fees or lower fees for these situations. The reason for the open-access fee is the fact that you’re retaining the copyright. You’re not giving it to the journal who can then use it to generate its revenue for supporting itself, the editorial staff, and so on.
Dr. Wilner: This idea of charging for publication has created a satellite business of what are called predatory journals. How does one know if the open-access journal that I’m submitting to is really just in the business of wanting my $300 or my $900 to get published? How do I know if that’s a reasonable place to publish?
Predatory Journals
Dr. Merino: That’s a big challenge that has come with this whole idea of open access and the fact that now, many journals are online only, so you’re no longer seeing a physical copy. That has given rise to the predatory journals.
The predatory journal, by definition, is a journal that claims to be open access. They’ll take your paper and publish it, but they don’t provide all the other services that you would typically expect from the fact that you’re paying an open-access fee. This includes getting appropriate peer review, production of the manuscript, and long-term curation and storage of the manuscript.
Many predatory journals will take your open-access fee, accept any paper that you submit, regardless of the quality, because they’re charging the fees for that. They don’t send it to real peer review, and then in a few months, the journal disappears so there’s no way for anybody to actually find your paper anymore.
There are certain checklists. Dr. David Moher at the University of Ottawa has produced some work trying to help us identify predatory journals.
One thing I typically suggest to people who ask me this question is: Have you ever heard of this journal before? Does the journal have a track record? How far back does the story of the journal go? Is it supported by a publisher that you know? Do you know anybody who has published there? Is it something you can easily access?
If in doubt, always ask your friendly medical librarian. There used to be curated lists of predatory journals that were constantly updated, but those had to be shut down; as far as I understand, there were legal issues over how journals ended up on the list.
I think that overall, if you’ve heard of it, if it’s relevant, if it’s known in your field, and if your librarian knows it, it’s probably a good legitimate open-access journal. There are many very good legitimate open-access journals.
I mentioned the two that we have in our family, but all the other major journals have their own open-access journal within their family. There are some, like BMC or PLOS, that are completely open-access and legitimate journals.
Impact Factor
Dr. Wilner: What about impact factor? Many journals boast about their impact factor. I’m not sure how to interpret that number.
Dr. Merino: Impact factor is very interesting. The impact factor was developed by medical librarians to try to identify the journals they should be subscribing to. It’s a measure of the average citations to an average paper in the journal.
It doesn’t tell you about specific papers. It tells you, on average, how many times the papers in this journal get cited. It’s calculated as the number of citations received divided by the number of articles published. Journals that publish many papers, like Neurology, have a hard time bringing up their impact factor beyond a certain level.
Similarly, very small journals with one or two very highly cited papers have a very high impact factor. It’s being used as a measure, perhaps inappropriately, of how good or how reputable a journal is. We all say we don’t care about journal impact factors, but we all know our journal impact factor and we used to know it to three decimals. Now, they changed the system, and there’s only one decimal point, which makes more sense.
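As a concrete illustration of the arithmetic Dr. Merino describes, here is a minimal sketch with invented numbers; the two-year citation window is the standard Journal Citation Reports definition, which the conversation glosses over.

```python
# Hypothetical numbers for illustration only.
citations_in_2023_to_2021_2022_items = 4200  # citations this year to papers from the prior two years
citable_items_published_2021_2022 = 600      # articles and reviews published in those two years

impact_factor = citations_in_2023_to_2021_2022_items / citable_items_published_2021_2022
print(round(impact_factor, 1))  # 7.0, now reported to a single decimal place as noted above
```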
This is more important, for example, for authors when deciding where to submit papers. I know that in some countries, particularly in Europe, the impact factor of the journal where you publish has an impact on your promotion decisions.
I would say what’s even more important than the impact factor, is to say, “Well, is this the journal that fits the scope of my paper? Is this the journal that reaches the audience that I want to reach when I write my paper?”
There are some papers, for example, that are very influential. The impact factor just captures citations. There are some papers that are very influential that may not get cited very often. There may be papers that change clinical practice.
If you read a paper that tells you that you should change how you treat your patients with myasthenia, that paper may not get cited much. It’s a very clinically focused paper, but it’s probably more impactful than one that gets cited a great deal, and the same goes for papers that make it into public policy decisions, and so on.
I think it’s important to look more at the audience and the journal scope when you submit your papers.
Dr. Wilner: One other technical question. The journals also say they’re indexed in PubMed or Google Scholar. If I want to publish my paper and I want it indexed where the right people are going to find it, where does it need to be indexed?
Dr. Merino: I grew up using Index Medicus, MEDLINE, and the Library of Science. I still do. If I need to find something, I go to PubMed. Ideally, papers are listed in MEDLINE or can be found in PubMed. They’re not the same thing, but you can find them through them.
That would be an important thing. Nowadays, a lot more people are using Google Scholar or just Google to identify papers. That may be a little less relevant, but being indexed in these databases is still a measure of the quality of the journal. For example, to get listed in MEDLINE, a journal has to pass certain quality checks by the index itself to see whether it will be accepted or not. That’s something you want to check.
Typically, most of the large journals, or the journals you and I know about, are listed in more than one place, right? They’re listed in Scopus and Web of Science. They’re listed in MEDLINE and so on. Again, if you’re submitting your paper, go somewhere where you know the journal and you’ve heard about it.
Dr. Wilner: I’m not going to ask you about artificial intelligence. We can do that another time. I want to ask something closer to me, which is this question of publish or perish.
There seems to be, in academics, more emphasis on the number of papers that one has published rather than their quality. How does a younger academician or one who really needs to publish cope with that?
Dr. Merino: Many people are writing up research that may not be relevant or that may not be high quality just because you need to have a long list of papers to get promoted, for example, if you’re an academician.
Doug Altman, who was a very influential figure in not only medical statistics but also the quality of medical publishing, had the idea that we need less research, but better research.
We often receive papers where you say, well, what’s the rationale behind the question in this paper? It’s like they had a large amount of data and were trying to squeeze as much as they could out of that. I think, as a young academician, the important thing to think about is whether it is an important question that matters to you and to the field, from whatever perspective, whether it’s going to advance research, advance clinical care, or have public policy implications.
Is this one where the answer will be important no matter what the answer is? If you’re thinking of that, your work will be well recognized, people will know you, and you’ll get invited to collaborate. I think that’s the most important thing rather than just churning out a large number of papers.
The productivity will come from the fact that you start by saying, let me ask something that’s really meaningful to me and to the field, with a good question and using strong research methodology.
Dr. Wilner: Thanks for that, Dr. Merino. I think that’s very valuable for all of us. This has been a great discussion. Do you have any final comments before we wrap up?
Dr. Merino: I want to encourage people to continue reading medical journals all the time and submitting to us, again, good research and important questions with robust methodology. That’s what we’re looking for in Neurology and most serious medical journals.
Dr. Wilner is an associate professor of neurology at the University of Tennessee Health Science Center, Memphis. Dr. Merino is a professor in the department of neurology at Georgetown University Medical Center, Washington, DC. Dr. Wilner reported conflicts of interest with Accordant Health Services and Lulu Publishing. Dr. Merino reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.