Oral Microbiome Test Could Detect Gastric Cancer Earlier

WASHINGTON, DC – A mouth rinse used to identify oral microbiome composition could serve as an early-detection tool for gastric cancer, new evidence suggests.

Researchers found distinct differences in bacterial composition between patient samples, suggesting that oral microbial signatures could be used as biomarkers for assessing gastric cancer risk.

“Too many patients are being diagnosed too late. There are no formal screening guidelines for gastric cancer, and more than half of patients with gastric cancer do not receive a diagnosis until their cancer is already at an advanced stage,” said Shruthi Reddy Perati, MD, a general surgery resident at Rutgers University Robert Wood Johnson School of Medicine in New Brunswick, New Jersey.

Detecting gastric cancer now generally requires an invasive procedure, such as endoscopy. Therefore, a noninvasive “swish and spit” test could be more accessible and allow for more widespread screening, Dr. Perati said at a May 8 press briefing during which her research (Abstract 949) was previewed for Digestive Disease Week® (DDW).

Gastric cancer, also known as stomach cancer, is the fourth most common cause of cancer-related death in the world. The United States can expect 26,890 new cases and 10,880 deaths from this type of cancer in 2024, the American Cancer Society estimates.

Microbial Signatures Found

Dr. Perati and colleagues collected oral rinse samples from 98 patients: 30 known to have gastric cancer, 30 with precancerous gastric conditions (pre–gastric cancer), and 38 control participants without pre–gastric or gastric cancer. Sixty-two percent were women, 32% were Hispanic, 31% had diabetes, and 18% were smokers.

The researchers analyzed the samples for alpha and beta diversity and conducted differential abundance analysis using the analysis of compositions of microbiomes (ANCOM) framework.
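
As a rough illustration of these three steps (the abstract does not name the researchers' software; the scikit-bio calls, toy count table, and group labels below are all assumptions), alpha diversity, beta diversity, and ANCOM can be computed as follows:

```python
# Hedged sketch of the analysis steps: alpha diversity, beta diversity,
# and ANCOM. The count table is random toy data standing in for the
# genus-level oral-rinse counts described in the abstract.
import numpy as np
import pandas as pd
from skbio.diversity import alpha_diversity, beta_diversity
from skbio.stats.composition import ancom

rng = np.random.default_rng(0)
counts = pd.DataFrame(
    rng.integers(0, 500, size=(9, 5)),                  # samples x genera
    index=[f"s{i}" for i in range(9)],
    columns=[f"genus_{j}" for j in range(5)],
)
groups = pd.Series(["control"] * 3 + ["pre_gc"] * 3 + ["gc"] * 3,
                   index=counts.index)

# Alpha diversity: within-sample richness/evenness (Shannon index here).
shannon = alpha_diversity("shannon", counts.values, ids=counts.index)

# Beta diversity: between-sample dissimilarity (Bray-Curtis here).
bray_curtis = beta_diversity("braycurtis", counts.values, ids=counts.index)

# ANCOM requires strictly positive values, so add a pseudocount of 1.
ancom_table, _ = ancom(counts + 1, grouping=groups)
print(shannon.head())
print(ancom_table.head())                               # W stat + reject flag
```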

They found distinct differences between the oral microbiomes of the healthy group and those of the groups with gastric cancer and pre–gastric cancer. In addition, the microbiomes of participants with cancer and of those with precancerous conditions were similar.

The results suggest that oral microbiome changes may occur as soon as the stomach environment starts to undergo the alterations that can eventually progress to cancer.

“The oral microbiome may serve as a window into the composition of the stomach environment,” Dr. Perati said.

The investigators built a screening model on the 13 bacterial genera that differed most between the control group and the gastric cancer and pre–gastric cancer groups. Under tenfold cross-validation, the model discriminated well using bacteria alone (area under the curve [AUC], 0.74), and performance improved further with the addition of clinical variables, including demographics and comorbidities (AUC, 0.91), the researchers noted.
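
For readers unfamiliar with this kind of evaluation, here is a minimal sketch of tenfold cross-validated AUC with and without clinical covariates. The abstract does not name the classifier, so the logistic regression, the scikit-learn API, and the random toy data are all assumptions; only the 98-patient and 13-genus dimensions echo the study:

```python
# Hedged sketch: 10-fold cross-validated AUC for a bacteria-only model
# vs bacteria + clinical covariates (toy data, assumed classifier).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n = 98
X_bacteria = rng.normal(size=(n, 13))      # 13 genus-level features (toy)
X_clinical = rng.normal(size=(n, 4))       # demographics/comorbidities (toy)
y = rng.integers(0, 2, size=n)             # 1 = gastric or pre-gastric cancer

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = LogisticRegression(max_iter=1000)

auc_bacteria = cross_val_score(clf, X_bacteria, y,
                               cv=cv, scoring="roc_auc").mean()
auc_combined = cross_val_score(clf, np.hstack([X_bacteria, X_clinical]), y,
                               cv=cv, scoring="roc_auc").mean()
print(f"AUC bacteria only: {auc_bacteria:.2f}; with clinical: {auc_combined:.2f}")
```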

The model’s improved performance with the addition of clinical variables is notable, said Loren Laine, MD, professor of medicine (digestive diseases) at Yale School of Medicine and chair of DDW 2024.

An AUC of 0.74 using bacteria alone, which increased to 0.91 by adding demographics and comorbidities, “[is] starting to be really meaningful,” Dr. Laine said.

Further studies should evaluate the test’s sensitivity and specificity, Dr. Laine added.

Additional Considerations

The microbiome can vary between people and within the same individual over time. Probiotics, antibiotics, and diet can lead to changes in the microbiome, Dr. Perati said.

When asked how these changes could affect the accuracy of an oral rinse test, Dr. Perati said, “It’s known that, in general, dietary modifications can have an impact on the diversity and the prevalence of certain bacteria throughout the GI tract.”

Though variance is expected, the researchers hope that the differences in microbiome composition between the malignant groups and the control group are more pronounced than lower-level background changes due to, for example, dietary modifications, she added.

The research is in its early days, and the results need to be validated in a larger study, Dr. Perati said.

Ninety-eight patients is “still a very small number,” said Dr. Laine, who co-moderated the press briefing. “More research is needed.”

Still, the study “has huge implications that could eventually lead to the development of noninvasive and accessible early screening for gastric cancer,” she said.

Dr. Perati and Dr. Laine reported no relevant financial relationships. The study was independently supported.

A version of this article appeared on Medscape.com.

Hypofractionated Radiotherapy Limits Toxic Effects in Cervical Cancer

TOPLINE:

Hypofractionated intensity-modulated radiotherapy (IMRT) may be safe and well tolerated in women with cervical cancer undergoing pelvic irradiation with concurrent chemotherapy following surgical resection, results from the phase 2 POHIM-CCRT trial suggested.

METHODOLOGY:

  • To date, no studies have assessed the treatment outcomes and toxic effects of hypofractionated IMRT following radical hysterectomy in patients with cervical cancer undergoing curative radiotherapy.
  • The team analyzed outcomes from 79 patients undergoing hypofractionated IMRT for cervical cancer after radical hysterectomy and pelvic lymph node dissection.
  • The median patient age was 48 years; 29.5% had stage IB to IIA disease, another 29.5% had stage IIB disease, and 41% had stage III disease. Patients also met at least one of the following criteria following radical hysterectomy and pelvic lymph node dissection: lymph node metastasis (39.7%), parametrial invasion (54.4%), or positive resection margin (5.1%).
  • The prescribed dose to the planning target volume was 40 Gy, delivered in 16 fractions to the whole pelvis, with any type of IMRT permitted (see the dose-equivalence sketch after this list). Overall, 71 patients also received concurrent weekly cisplatin (40 mg/m2 of body surface area for three cycles), and eight received fluorouracil (1000 mg/m2 on days 1-5) with cisplatin (60 mg/m2 for two cycles).
  • The primary endpoint was the incidence of acute grade 3 or higher gastrointestinal tract, genitourinary, and hematologic toxic effects during radiotherapy or within 3 months of completing radiotherapy.
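
As context for the “hypofractionated” label, a quick, hedged calculation converts the trial regimen into dose per fraction and 2-Gy-equivalent dose (EQD2). The alpha/beta ratios below are conventional textbook assumptions, not values reported by the trial:

```python
# Hedged sketch: why 40 Gy / 16 fractions counts as hypofractionated.
# Conventional fractionation is roughly 1.8-2.0 Gy per fraction; the
# alpha/beta ratios are standard textbook assumptions, not trial values.
total_dose = 40.0            # Gy, from the trial protocol
n_fractions = 16
d = total_dose / n_fractions # = 2.5 Gy per fraction

def eqd2(total, per_fraction, alpha_beta):
    """2-Gy-equivalent dose: EQD2 = D * (d + a/b) / (2 + a/b)."""
    return total * (per_fraction + alpha_beta) / (2 + alpha_beta)

print(f"dose per fraction: {d:.1f} Gy")
print(f"EQD2, tumor (a/b = 10):       {eqd2(total_dose, d, 10):.1f} Gy")
print(f"EQD2, late effects (a/b = 3): {eqd2(total_dose, d, 3):.1f} Gy")
```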

TAKEAWAY:

  • After radiotherapy, only two patients (2.5%) experienced acute grade 3 or higher toxic effects. One was hospitalized for enterocolitis on the last day of radiotherapy and developed grade 3 anemia 3 months after completing radiotherapy; the other experienced hematologic toxic effects and also developed grade 3 anemia 3 months after completing radiotherapy.
  • No patients experienced late grade 3 or higher toxic effects.
  • When assessing toxic effects of any grade, acute and late gastrointestinal tract toxicities occurred in 76% and 31.6% of patients, respectively; acute and late genitourinary toxicities, all grade 1, occurred in 19% and 24.1% of patients, respectively; and hematologic toxicities occurred in 29.1% and 6.3% of patients, respectively.
  • Overall, at 3 years, 79.3% of patients were disease-free and 98% were alive. After a median follow-up of 43 months, 16 patients (20.3%) experienced disease recurrence, four of whom were salvaged and three of whom died.

IN PRACTICE:

“This nonrandomized controlled trial is the first prospective trial, to our knowledge, to show acceptable acute toxic effects of hypofractionated IMRT for cervical cancer in a postoperative concurrent chemoradiotherapy setting,” the authors said, adding that the rate of grade 3 or higher acute toxic effects of 2.5% reported in this study was “substantially lower than our initial hypothesis of less than 15%.”

However, in an accompanying editorial, Mark E. Bernard, MD, of the University of Kentucky College of Medicine, Lexington, highlighted caveats to the study design and raised two core questions: “Should acute toxic effects be the primary endpoint of a single-group, phase 2 study using hypofractionation with fewer cycles of concurrent chemotherapy? Should the primary endpoint rather have been a cancer control endpoint, such as disease-free survival, overall survival, or local control?”

Still, Dr. Bernard wrote, “This trial does help lay the foundation for future pelvic hypofractionated trials with concurrent chemotherapy, especially for gynecological malignant tumors.”

SOURCE:

The research, led by Won Park, MD, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea, was published in JAMA Oncology.

LIMITATIONS:

The trial is a single-arm study, with a short follow-up time. In the editorial, Bernard listed several limitations, including the fact that patients received fewer cycles of concurrent chemotherapy than what’s typically given in this population.

DISCLOSURES:

No funding or relevant financial relationships were declared.

A version of this article appeared on Medscape.com.

Inappropriate Medication Use Persists in Older Adults With Dementia

Medications that could have a negative effect on cognition are often used by older adults with dementia, according to data from approximately 13 million individuals presented at the annual meeting of the American Geriatrics Society.

Classes of medications including anticholinergics, antipsychotics, benzodiazepines, and non-benzodiazepine sedatives (Z drugs) have been identified as potentially inappropriate medications (PIMs) in patients with dementia, according to The American Geriatrics Society Beers Criteria for Potentially Inappropriate Medication Use in Older Adults.

The medications that could worsen dementia or cognition are known as CogPIMs, said presenting author Caroline M. Mak, a doctor of pharmacy candidate at the University at Buffalo School of Pharmacy and Pharmaceutical Sciences, New York.

Previous research has characterized the prevalence of use of CogPIMs, but data connecting use of CogPIMs and healthcare use are lacking, Ms. Mak said.

Ms. Mak and colleagues conducted a cross-sectional analysis of data from 2011 to 2015 from the Medical Expenditure Panel Survey (MEPS), a national survey with data on medication and healthcare use. The researchers included survey respondents older than 65 years with dementia, representing approximately 13 million individuals in weighted estimates.

Exposure to CogPIMs was defined as filling a prescription for one or more of the CogPIMs during the study period. Population estimates of the prevalence of use of the CogPIMs were created using survey-weighted procedures, and prevalence trends were assessed using the Cochran-Armitage test.
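
For illustration, a bare-bones Cochran-Armitage trend test can be written as below. The yearly counts are invented for scale only, and the real MEPS analysis used survey-weighted procedures that this unweighted sketch omits:

```python
# Hedged sketch: an unweighted Cochran-Armitage test for linear trend in
# yearly prevalence. Counts are invented; the actual MEPS analysis used
# survey weights that this plain version does not handle.
import numpy as np
from scipy.stats import norm

def cochran_armitage(cases, totals, scores=None):
    """Two-sided Cochran-Armitage trend test across k ordered groups."""
    cases = np.asarray(cases, dtype=float)
    totals = np.asarray(totals, dtype=float)
    t = np.arange(len(cases)) if scores is None else np.asarray(scores, float)
    N = totals.sum()
    p = cases.sum() / N                       # pooled proportion
    T = np.sum(t * (cases - totals * p))      # score statistic
    var = p * (1 - p) * (np.sum(totals * t**2) - np.sum(totals * t)**2 / N)
    z = T / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Toy benzodiazepine counts, 2011-2015 (rising from ~8.9% toward ~16.4%).
users = [89, 107, 126, 145, 164]
respondents = [1000, 1000, 1000, 1000, 1000]
z, p_value = cochran_armitage(users, respondents)
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```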

Overall, the prevalence was 15.9%, 11.5%, 7.5%, and 3.8% for use of benzodiazepines, anticholinergics, antipsychotics, and Z drugs, respectively, during the study period.

Of these, benzodiazepines showed a significant trend with an increase in prevalence from 8.9% in 2011 to 16.4% in 2015 (P = .02).

The odds of hospitalization were more than twice as high in individuals who reported using Z drugs (odds ratio, 2.57; P = .02) based on logistic regression. In addition, exposure to antipsychotics was significantly associated with an increased rate of hospitalization in a binomial model (incidence rate ratio [IRR], 1.51; P = .02).
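
A minimal sketch of how such estimates are typically produced follows. The abstract does not give the exact model specification; the negative binomial choice for the rate model, the statsmodels API, and all variable names are assumptions, and the data are random:

```python
# Hedged sketch: odds ratio from logistic regression and a rate ratio
# from a count-outcome GLM. The abstract's "binomial model" spec is not
# given; negative binomial is assumed here, and the data are toy values.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "z_drug": rng.integers(0, 2, size=n),
    "antipsychotic": rng.integers(0, 2, size=n),
})
df["hospitalized"] = rng.integers(0, 2, size=n)                  # toy outcome
df["admissions"] = rng.poisson(0.4 + 0.3 * df["antipsychotic"])  # toy counts

# Odds ratio for any hospitalization among Z-drug users.
logit = sm.Logit(df["hospitalized"],
                 sm.add_constant(df[["z_drug"]])).fit(disp=0)
print("OR, Z drugs:", round(float(np.exp(logit.params["z_drug"])), 2))

# Incidence rate ratio for admissions among antipsychotic users.
nb = sm.GLM(df["admissions"], sm.add_constant(df[["antipsychotic"]]),
            family=sm.families.NegativeBinomial()).fit()
print("IRR, antipsychotics:", round(float(np.exp(nb.params["antipsychotic"])), 2))
```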

The findings were limited by several factors including the cross-sectional design, reliance on self-reports, and the lack of more recent data.

However, the results show that CogPIMs are often used by older adults with dementia, and antipsychotics and Z drugs could be targets for interventions to prevent harm from medication interactions and side effects, the researchers concluded.

Findings Highlight Need for Drug Awareness

The current study is important because of the expansion in the aging population and an increase in the number of patients with dementia, Ms. Mak said in an interview. “In both our older population and dementia patients, there are certain medication considerations that we need to take into account, and certain drugs that should be avoided if possible,” she said. Clinicians have been trying to use the Beers criteria to reduce potential medication harm, she noted. “One group of investigators (Hilmer et al.) has proposed a narrower focus on anticholinergic and sedative/hypnotic medication in the Drug Burden Index (DBI); the CogPIMs are a subset of both approaches (Beers and DBI) and represent a collection of medications that pose potential risks to our patients,” said Ms. Mak.

Continued reassessment is needed on appropriateness of anticholinergics, Z drugs, benzodiazepines, and antipsychotics in older patients with dementia, she added.

“Even though the only group to have a significant increase in prevalence [of use] was the benzodiazepine group, we didn’t see a decrease in any of the other groups,” said Ms. Mak. The current research provides a benchmark for CogPIMs use that can be monitored in the future for increases or, ideally, decreases, she said.

Part of a Bigger Picture

The current study is part of the work of Team Alice, a national deprescribing group affiliated with the University at Buffalo that was inspired by the tragic death of Alice Brennan, triggered by preventable medication harm, Ms. Mak said in an interview. “Team Alice consists of an array of academic, primary care, health plan, and regional health information partners that have designed patient-driven interventions to reduce medication harm, especially within primary care settings,” she said. “Their mission is to save people like Alice by pursuing multiple strategies to deprescribe unsafe medication, reduce harm, and foster successful aging. By characterizing the use of CogPIMs, we can design better intervention strategies,” she said.

Although Ms. Mak was not surprised that benzodiazepines emerged as the most commonly used drug group, she was surprised by the increase in their use during the study period.

“Unfortunately, our dataset was not rich enough to include reasons for this increase,” she said. In practice, “I have seen patients getting short-term, as needed, prescriptions for a benzodiazepine to address the anxiety and/or insomnia after the loss of a loved one; this may account for a small proportion of benzodiazepine use that appears to be inappropriate because of a lack of associated appropriate diagnosis,” she noted.

Also, the finding of increased hospitalization associated with Z drugs raises concerns, Ms. Mak said. Although the findings are consistent with other research, they illustrate the need for further investigation to identify strategies to prevent this harm, she said. “Not finding associations with hospitalization related to benzodiazepine or anticholinergics was a mild surprise,” Ms. Mak said in an interview. “However, while we know that these drugs can have a negative effect on older people, the effects may not have been severe enough to result in hospitalizations,” she said.

Looking ahead, Ms. Mak said she would like to see the study rerun with a more current data set, especially with regard to benzodiazepines and antipsychotics.

Seek Strategies to Reduce Medication Use

The current study was notable for its community-based population and attention to hospitalizations, Shelly Gray, PharmD, a professor of pharmacy at the University of Washington School of Pharmacy, said in an interview.

“Most studies examining potentially inappropriate medications that may impair cognition have been conducted in nursing homes, while this study focuses on community-dwelling older adults, where most people with dementia live,” said Dr. Gray, who served as a moderator for the session in which the study was presented.

In addition, “A unique aspect of this study was to examine how these medications are related to hospitalizations,” she said.

Given recent efforts to reduce use of potentially inappropriate medications in people with dementia, the increase in prevalence of use over the study period was surprising, especially for benzodiazepines, said Dr. Gray.

In clinical practice, “health care providers should continue to look for opportunities to deprescribe medications that may worsen cognition in people with dementia,” she said. However, more research is needed to examine trends in the years beyond 2015 for a more contemporary picture of medication use in this population, she noted.

The study received no outside funding. The researchers and Dr. Gray had no financial conflicts to disclose.

Diagnosing Giant Cell Arteritis Using Ultrasound First Proves Accurate, Avoids Biopsy in Many Cases

Temporal artery ultrasound alone was sufficient to accurately diagnose giant cell arteritis (GCA) in over half of patients in a new prospective study.

The findings provide further evidence that “[ultrasound] of temporal arteries could really take the place of traditional temporal artery biopsy (TAB)” in patients with high clinical suspicion of GCA, lead author Guillaume Denis, MD, of the Centre Hospitalier de Rochefort in Rochefort, France, told this news organization.

The European Alliance of Associations for Rheumatology (EULAR) already recommends ultrasound as a first-line diagnostic tool for patients with suspected large vessel vasculitis, and the 2022 American College of Rheumatology (ACR)/EULAR classification criteria for GCA weigh a positive TAB and a temporal artery halo sign on ultrasound equally.

Guidelines from the ACR and the Vasculitis Foundation still recommend TAB over ultrasound.

“In general, rheumatologists and radiologists in the US are less experienced in using ultrasound to diagnose temporal artery involvement in GCA compared to their counterparts in Europe,” the 2021 guidelines stated. “In centers with appropriate training and expertise in using temporal artery ultrasound, ultrasound may be a useful and complementary tool for diagnosing GCA.”

Methodology

In the study, researchers recruited 165 individuals with high clinical suspicion of GCA from August 2016 through February 2020 at six French hospitals. Only patients older than 50 years with laboratory evidence of inflammation (C-reactive protein ≥ 6 mg/L) qualified for the study. Patients also needed to have at least one of these factors:

  • Clinical signs of GCA (abnormal temporal arteries, scalp hyperesthesia, jaw claudication, or vision loss)
  • General signs of GCA (headache, fever, or impaired general condition)
  • Large-vessel vasculitis visible on imaging (CT angiography [CTA], MR angiography [MRA], and/or PET/CT)

All participants underwent a color Doppler ultrasound of the temporal artery, performed less than 1 week after the initiation of corticosteroid therapy. (Previous research demonstrated that corticosteroids can change the hallmark halo sign of vasculitis detectable via ultrasound as early as 1 week after initiation of therapy, the authors noted.) In this study, the time between consultation with a specialist and ultrasound was less than 1 day.

“Patients with halo signs detected around the lumen of both temporal arteries (that is, bilateral temporal halo sign) were considered as ultrasound-positive,” Guillaume Denis, MD, and colleagues explained. “Patients with no halo sign, or bilateral halo signs in the axillary arteries, or a unilateral halo sign in the temporal artery were considered as ultrasound-negative.”

The findings were published in Annals of Internal Medicine on May 7.

Results

In total, 73 participants (44%) had positive ultrasounds and were diagnosed with GCA. These patients also underwent a second ultrasound a month later to document if the halo sign remained unchanged, reduced, or disappeared.

The remaining 92 patients with negative ultrasound results underwent TAB, which was conducted on average 4.5 days after the ultrasound. A total of 28 patients (30%) had a positive TAB result. Physicians diagnosed 35 TAB-negative patients with GCA using clinical, imaging, and biologic data, and 29 patients received alternative diagnoses. These other diagnoses included polymyalgia rheumatica, infectious diseases, cancer, and other systemic inflammatory rheumatic diseases.

All patients diagnosed with GCA via ultrasound had their diagnoses reconfirmed at 1 month and for up to 2 years of follow-up.

“In summary, our study showed that the use of temporal artery ultrasound may be an efficient way to make the diagnosis of GCA in patients with high clinical suspicion and to reduce imaging costs and the need for biopsy, thereby limiting complications and the need for a surgeon,” the authors concluded.

Qualifications and Limitations

While over half of patients ultimately diagnosed with GCA were diagnosed using ultrasound, that percentage was “a bit lower than expected,” said Mark Matza, MD, MBA, the co-clinical director of rheumatology at Massachusetts General Hospital in Boston. By comparison, one systematic review calculated ultrasound’s pooled sensitivity at 88% and pooled specificity at 96% for the diagnosis of GCA.

“In this [current] study, 30% of patients who had negative ultrasound were then found to have positive biopsy, indicating that ultrasound missed a substantial portion of patients who were ultimately diagnosed with GCA,” he continued.

Ultrasound is “very operator dependent,” he added, and there has been “variability in test performance of ultrasound.”

The authors acknowledged that techniques for ultrasound of the temporal arteries have also evolved over the study period, and thus, findings may not have been consistent.

However, about one in four patients with GCA were diagnosed after having both negative ultrasound and TAB results.
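
The arithmetic behind these fractions follows directly from the reported counts; this is a quick consistency check, not part of the published analysis:

```python
# Quick consistency check of the reported diagnostic breakdown among
# all patients ultimately diagnosed with GCA in the study.
ultrasound_positive = 73   # diagnosed by ultrasound alone
tab_positive = 28          # ultrasound-negative, biopsy-positive
clinical_only = 35         # negative on both tests, diagnosed clinically
total_gca = ultrasound_positive + tab_positive + clinical_only  # 136

print(f"diagnosed by ultrasound first: {ultrasound_positive / total_gca:.0%}")
# -> 54%, i.e., "over half"
print(f"negative on both tests: {clinical_only / total_gca:.0%}")
# -> 26%, i.e., "about one in four"
```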

“One of the things that this paper shows is that even the gold standard of temporal artery biopsy isn’t 100% either,” noted Minna Kohler, MD, who directs the rheumatology musculoskeletal ultrasound program at Massachusetts General Hospital. “That’s why clinically, there is an increasing emphasis on using multimodality imaging to assist in the diagnosis of GCA along with a physician’s clinical intuition,” she said.

While ultrasound can visualize axillary, subclavian, and carotid arteries, other imaging modalities such as CTA, MRA, and PET/CT are better to fully assess supra-aortic and aortic vessels, she continued. However, “this imaging is more expensive and takes more time to coordinate, schedule, whereas ultrasound of temporal and axillary arteries can easily be done within the clinic with an immediate answer.”

This study was supported by a grant from “Recherche CH-CHU Poitou-Charentes 2014.” Dr. Denis disclosed relationships with Leo Pharma, Janssen, Novartis, Takeda, and Sanofi. Dr. Matza reported honoraria from the Ultrasound School of North American Rheumatologists. Dr. Kohler had no relevant disclosures.
 

A version of this article appeared on Medscape.com.


Monoclonal Antibody With Unique Mechanism Gets Second Chance in RA

Article Type
Changed
Mon, 05/13/2024 - 15:41

LIVERPOOL, ENGLAND — The IRIS-RA study of the investigational monoclonal antibody nipocalimab in patients with rheumatoid arthritis (RA) did not meet its primary endpoint, but some people with moderate to severe RA might still benefit from the drug, according to data reported at the British Society for Rheumatology annual meeting.

The primary endpoint for the phase 2A trial was the least squares mean change in Disease Activity Score in 28 joints using C-reactive protein (DAS28-CRP) from baseline to 12 weeks of treatment. The score fell by 1.03 points with nipocalimab and by 0.58 points with placebo, a between-group difference of just −0.45 (P = .224).
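
A minimal sketch of that primary-endpoint arithmetic, using only the numbers reported here plus the 1.0-point powering target described under "Choice of Endpoint" below; it is an illustration, not the trial's statistical model.

```python
# Least squares mean changes in DAS28-CRP at 12 weeks, as reported.
change_nipocalimab = -1.03
change_placebo = -0.58

difference = change_nipocalimab - change_placebo
print(f"Between-group difference: {difference:.2f}")  # -0.45 (P = .224)

# The trial was powered to detect a 1.0-point between-group reduction,
# so the observed effect was under half the powered difference.
print(f"Fraction of powered effect observed: {abs(difference) / 1.0:.2f}")  # 0.45
```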

However, one of the key secondary endpoints was the proportion of patients achieving a 20% improvement in American College of Rheumatology response criteria (ACR20). On this measure, the gap favoring nipocalimab was larger: 45.5% of nipocalimab-treated vs 20.0% of placebo-treated participants achieved an ACR20 response (P = .055).

Moreover, an analysis stratifying for anti-citrullinated protein autoantibody (ACPA) levels at baseline found that people with higher levels had a better response to nipocalimab.
 

Choice of Endpoint

“The way this study was powered was to look at a change between the treatment groups of a DAS28-CRP reduction of 1.0,” said Peter C. Taylor, BMBCh, PhD, the Norman Collisson chair of musculoskeletal medicine at the University of Oxford in Oxford, England.

DAS28-CRP was often chosen as the primary endpoint in small proof-of-concept studies, such as IRIS-RA, because it was a “measure of continuous change [that] theoretically, would allow greater sensitivity to change,” Dr. Taylor added.

“Ironically, it has to be said that had we chosen ACR20, we would have hit the primary endpoint. One lives and learns,” noted Dr. Taylor.
 

Proof of Concept

IRIS-RA was billed as a “proof-of-concept” study because it was the first time that a monoclonal antibody targeting the neonatal fragment crystallizable receptor (FcRn) was being tested in an RA population.

The study was a randomized double-blind trial in which 33 people with moderate to severe RA who had an inadequate response to tumor necrosis factor (TNF) inhibitors were treated with nipocalimab at a dose of 15 mg/kg given intravenously every 2 weeks, and 20 received a matching placebo. Participants were treated for 10 weeks, and then the primary follow-up was at 12 weeks, with additional follow-up for safety undertaken at 18 weeks.

Nipocalimab is a fully human, immunoglobulin G1 (IgG1) monoclonal antibody that is designed to selectively block the FcRn. By doing so, it essentially stops IgG from being recycled within the immune system, and this in turn lowers IgG levels. That includes potentially harmful ACPAs, among other pathogenic antibodies, Dr. Taylor and fellow investigators explained in their abstract.

“We’ve known for a long time that ACPA have prognostic value, but there’s been controversy about whether or not ACPA are actually pathogenic,” Dr. Taylor said. “So, one of the hypotheses that this study gives rise to is that by blocking FcRn, and thereby reducing, potentially, the concentration of ACPA in the blood, will we actually have efficacy in patients?”
 

Are ACPA Really Lowered?

Paul Emery, MD, Versus Arthritis professor of rheumatology and director of the Leeds Biomedical Research Centre at the University of Leeds in Leeds, England, questioned the reduction in antibody levels during the discussion that followed.


Although these data had not been presented, Dr. Emery observed that the reduction in IgG was actually greater than that in ACPA, “which is fairly critical. Is it feasible to look to selectively lower normal immunoglobulin over pathogenic autoantibodies?”

Dr. Emery also wanted to know if there “was a floor on the reduction of immunoglobulin” with long-term therapy, “which would be a worry.”

Dr. Taylor responded that total IgG had been reduced by about 65% and ACPA by about 40%. Why this difference exists is not yet clear. It could be because ACPA are part of complexed antibodies.

“Most of these patients are rheumatoid factor [RF]–positive,” said Dr. Taylor, pointing out that although IgM “wouldn’t normally be affected by FcRn blockade,” there was a 10% reduction in RF IgM, probably because it was complexed to IgG.

“So, the hypothesis here is that if you look at the clearance of complexes, they’re handled differently in the cytoplasm from the clearance of monomeric IgG. But that’s a hypothesis. It needs further investigation. In vitro, there’s very good, confirmatory evidence to support that. But we’ve yet to explore that more fully in vivo,” Dr. Taylor said.

As for long-term effects, Dr. Taylor responded: “All I can tell you is [that] after the 10-week intervention, that up to an 18-week observation period, immunoglobulin levels recovered very rapidly afterwards. And you mustn’t forget that other isotypes are not affected, unlike rituximab.”
 

Safety and Other Results

With regard to safety, 27 (82%) of nipocalimab- and 12 (60%) of placebo-treated participants experienced at least one treatment-emergent adverse event (TEAE). The most common, occurring in 10% or more of cases, were RA flares (36.4% for nipocalimab vs 15.0% with placebo), headache (12.1% vs 5.0%), and COVID-19 (12.1% vs 0.0%).
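
The percentages map cleanly back onto whole-patient counts; a small sketch, assuming the denominators are the 33 nipocalimab-treated and 20 placebo-treated participants given above:

```python
# Implied event counts behind the reported TEAE percentages.
n_nipo, n_plac = 33, 20

def implied_count(pct: float, n: int) -> int:
    """Convert a reported percentage back to a whole-patient count."""
    return round(pct / 100 * n)

rows = [
    ("any TEAE", 82.0, 60.0),   # 27 vs 12, as reported
    ("RA flare", 36.4, 15.0),   # 12 vs 3
    ("headache", 12.1, 5.0),    # 4 vs 1
    ("COVID-19", 12.1, 0.0),    # 4 vs 0
]
for name, p_nipo, p_plac in rows:
    print(name, implied_count(p_nipo, n_nipo), "vs", implied_count(p_plac, n_plac))
```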

There were three serious TEAEs, all in the nipocalimab-treatment group: One was an infection of a burn that had been present at inclusion, another was a deep vein thrombosis that resolved with apixaban treatment, and the other was an infusion-related reaction that resolved with supportive treatment.

Another notable efficacy finding: the proportion of patients achieving DAS28-CRP remission at 12 weeks with nipocalimab vs placebo was substantially greater among people with high baseline ACPA levels (a respective 40.0% vs 16.7%) than in the total population (21.2% vs 10.0%).

Similar findings were seen for the proportion of patients achieving an ACR50, and there were numerically greater reductions in the components of the ACR response criteria such as tender and swollen joints with nipocalimab vs placebo. All of these were exploratory observations, Dr. Taylor emphasized.
 

Combination and Further Trials

Further trials of nipocalimab are planned or are already ongoing in systemic lupus erythematosus, active lupus nephritis, Sjögren disease, and five other diseases.

In RA, nipocalimab is now being tested in combination with the TNF inhibitor certolizumab pegol (Cimzia) in the DAISY-RA trial. This is another proof-of-concept, phase 2A trial with a target accrual of 104 patients.

The IRIS-RA study was funded by Janssen Research & Development. Dr. Taylor serves as a consultant to AbbVie, Biogen, Eli Lilly, Fresenius, Galapagos, Gilead Sciences, GlaxoSmithKline, Janssen, Nordic Pharma, Pfizer, Sanofi, Aqtual, and UCB and received research funding from Galapagos, among others. Dr. Emery received research grants paid to his institution from AbbVie, Bristol Myers Squibb (BMS), Pfizer, MSD, and Roche; received consultant fees from BMS, AbbVie, Pfizer, MSD, Novartis, Roche, and UCB; and has undertaken clinical trials and provided expert advice to Pfizer, MSD, AbbVie, BMS, UCB, Roche, Novartis, Samsung, Sandoz, and Lilly.
 

A version of this article appeared on Medscape.com.


Survey Spotlights Identification of Dermatologic Adverse Events From Cancer Therapies

Article Type
Changed
Mon, 05/13/2024 - 15:09

 

SAN DIEGO — Compared with medical oncologists, dermatologists were more likely to correctly classify and grade dermatologic adverse events from cancer therapies, results from a multicenter survey showed.

“New cancer therapies have brought a diversity of treatment-related dermatologic adverse events (dAEs) beyond those experienced with conventional chemotherapy, which has demanded an evolving assessment of toxicities,” researchers led by Nicole R. LeBoeuf, MD, MPH, of the Department of Dermatology at Brigham and Women’s Hospital and the Center for Cutaneous Oncology at the Dana-Farber Brigham Cancer Center, Boston, wrote in a poster presented at the American Academy of Dermatology annual meeting.

The authors noted that Version 5.0 of the Common Terminology Criteria for Adverse Events (CTCAE v5.0) “serves as the current, broadly accepted criteria for classification and grading during routine medical care and clinical trials. But despite extensive utilization of CTCAE, there is little data regarding its application.”

To evaluate how CTCAE is being used in clinical practice, they sent a four-case survey of dAEs to 81 dermatologists and 182 medical oncologists at six US-based academic institutions. For three of the cases, respondents were asked to classify and grade morbilliform, psoriasiform, and papulopustular rashes based on a review of photographs and text descriptions. For the fourth case, respondents were asked to grade a dAE using only a clinic note text description. The researchers used chi-square tests in R software to compare survey responses.
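
As an illustration of the comparison being run, here is a hedged Python equivalent of those chi-square tests (the study used R); the contingency table is back-calculated from the reported extremes, so the exact counts are an assumption, not the authors' data.

```python
# Illustrative chi-square test on a 2x2 table of correct vs incorrect
# responses, using counts back-calculated from the reported rates
# ("as low as 12%" of 182 oncologists vs 87% of 81 dermatologists).
from scipy.stats import chi2_contingency

derm_correct = round(0.87 * 81)    # ~70 of 81 dermatologists
onc_correct = round(0.12 * 182)    # ~22 of 182 oncologists
table = [
    [derm_correct, 81 - derm_correct],
    [onc_correct, 182 - onc_correct],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # p << .001, consistent with the report
```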

Compared with medical oncologists, dermatologists were significantly more likely to provide correct responses in characterizing morbilliform and psoriasiform eruptions: correct classification was “as low as 12%” among medical oncologists, whereas even the lowest rate among dermatologists was 87% (P < .001). Similarly, dermatologists were significantly more likely to grade the psoriasiform, papulopustular, and written cases correctly compared with medical oncologists (P < .001 for all associations).

“These cases demonstrated poor concordance of classification and grading between specialties and across medical oncology,” the authors concluded in their poster, noting that 87% of medical oncologists were interested in additional educational tools on dAEs. “With correct classification as low as 12%, medical oncologists may have more difficulty delivering appropriate, toxicity-specific therapy and may consider banal eruptions dangerous.”

Poor concordance of grading among the two groups of clinicians “raises the question of whether CTCAE v5.0 is an appropriate determinant for patient continuation on therapy or in trials,” they added. “As anticancer therapy becomes more complex — with new toxicities from novel agents and combinations — we must ensure we have a grading system that is valid across investigators and does not harm patients by instituting unnecessary treatment stops.”

Future studies, they said, “can explore what interventions beyond involvement of dermatologists improve classification and grading in practice.”

Adam Friedman, MD, professor and chair of dermatology at George Washington University, Washington, who was asked to comment on the study, noted that with the continued expansion and introduction of new targeted and immunotherapies in the oncology space, “you can be sure we will continue to appreciate the importance and value of the field of supportive oncodermatology, as hair, skin, and nails are almost guaranteed collateral damage in this story.

“Ensuring early identification and consistent grading severity is not only important for the plethora of patients who are currently developing the litany of cutaneous adverse events but to evaluate potential mitigation strategies and even push along countermeasures down the FDA approval pathway,” Dr. Friedman said. In this study, the investigators demonstrated that work “is sorely needed, not just in dermatology but even more so for our colleagues across the aisle. A central tenet of supportive oncodermatology must also be education for all stakeholders, and the good news is our oncology partners will welcome it.”

Dr. LeBoeuf disclosed that she is a consultant to and has received honoraria from Bayer, Seattle Genetics, Sanofi, Silverback, Fortress Biotech, and Synox Therapeutics outside the submitted work. No other authors reported having financial disclosures. Dr. Friedman directs the supportive oncodermatology program at GW that received independent funding from La Roche-Posay.
 

 

 

A version of this article first appeared on Medscape.com.


Traffic Noise Negatively Impacts Health

Article Type
Changed
Mon, 05/13/2024 - 14:49

 

New research by Thomas Münzel, MD, senior professor of cardiology at Johannes Gutenberg University Mainz in Mainz, Germany, and colleagues again emphasized the harmful effects of noise on the heart and blood vessels. An analysis of current epidemiologic data provided strong indications that transportation noise is closely related to cardiovascular and cerebrovascular diseases, according to a statement on the data analysis. The results were published in Circulation Research.

Morbidity and Mortality

Epidemiologic studies have shown that road, rail, or air traffic noise increases the risk for cardiovascular morbidity and mortality, with strong evidence for ischemic heart disease, heart failure, and stroke, according to the scientists. The World Health Organization reported that at least 1.6 million healthy life years are lost annually in Western Europe because of traffic-related noise. Nighttime traffic noise leads to sleep fragmentation and shortening, an increase in stress hormone levels, and increased oxidative stress in the vessels and brain. These factors could favor vascular (endothelial) dysfunction, inflammation, and hypertension, thereby increasing cardiovascular risk.

Consequences and Pathomechanisms

In the current publication, the authors provided an overview of epidemiologic research on the effects of transportation noise on cardiovascular risk factors and diseases, discussed mechanistic insights from the latest clinical and experimental studies, and proposed new risk markers to address noise-induced cardiovascular effects in the general population. An integrated analysis in the article demonstrated that for every 10 dB(A) increase, the risk for cardiovascular diseases such as heart attack, stroke, and heart failure significantly increases by 3.2%.
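
A minimal sketch of how that effect size scales, assuming the 3.2% increase per 10 dB(A) compounds log-linearly across larger increments (the publication reports only the per-10 dB(A) figure, so the compounding is an assumption):

```python
# Relative risk implied by a 3.2% increase per 10 dB(A) of transportation
# noise, under a log-linear (compounding) exposure-response assumption.
def relative_risk(delta_db: float, per_10db: float = 0.032) -> float:
    return (1 + per_10db) ** (delta_db / 10)

for delta in (10, 20, 30):
    print(f"+{delta} dB(A): RR = {relative_risk(delta):.3f}")
# +10 dB(A): RR = 1.032; +20: ~1.065; +30: ~1.099
```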

The authors also explained the possible effects of noise on changes in gene networks, epigenetic pathways, circadian rhythms, signal transmission along the neuronal-cardiovascular axis, oxidative stress, inflammation, and metabolism. Finally, current and future noise protection strategies are described, and the existing evidence on noise as a cardiovascular risk factor is discussed.

Confirmed Cardiovascular Risk Factor

“As an increasing proportion of the population is exposed to harmful traffic noise, efforts to reduce noise and laws for noise reduction are of great importance for future public health,” said Dr. Münzel. “It is also important for us that due to the strong evidence, traffic noise is finally recognized as a risk factor for cardiovascular diseases.”

Heart Attack Outcomes

Dr. Münzel and other researchers from Mainz have been studying the cardiovascular consequences of air pollution and traffic noise for several years. For example, they found that heart attacks in people and animals exposed to high noise levels earlier in life healed poorly. These results were published last year in Cardiovascular Research. According to the authors, the findings suggest that traffic noise may play a significant role in the development and course of coronary heart disease, such as after a heart attack.

The scientists initially found in animal experiments that exposure to aircraft noise for 4 days led to increased inflammation in the vessels. Compared with mice not exposed to aircraft noise, the noise-exposed animals showed an increase in free radicals; these animals exhibited a significant inflammatory response and had impaired vessel function.

The researchers explained that the experimental data showed aircraft noise alone triggers a proinflammatory transcription program that promotes the infiltration of immune cells into cardiovascular tissue in animals with acute myocardial infarction. They noted an increased infiltration of CD45+ cells into the vessels and heart, dominated by neutrophils in vessel tissue and Ly6Chigh monocytes in heart tissue. This infiltration creates a proinflammatory milieu that adversely affects the outcome after myocardial infarction by predisposing the heart tissue to greater ischemic damage and functional impairment. Exposure of animals to aircraft noise before induction of myocardial infarction by left anterior descending (LAD) coronary artery ligation impaired left ventricular function and increased infarct size after cardiac ischemia. In addition, noise exposure exacerbated infarct-induced endothelial dysfunction of peripheral vessels as early as 24 hours after LAD ligation.

 

 

Clinical Confirmation

These experimental results were confirmed by observations in the population-based Gutenberg Health Study. The researchers analyzed data from 100 patients with heart attack. The lead and senior authors of the study Michael Molitor, MD, and Philip Wenzel, MD, of the University of Mainz, explained, “From our studies, we have learned that exposure to aircraft noise before a heart attack significantly amplifies subsequent cardiovascular inflammation and exacerbates ischemic heart failure, which is favored by inflammation-promoting vascular conditioning. Our translational results show that people who have been exposed to noise in the past have a worse course if they experience a heart attack later in life.”

Study participants who had experienced a heart attack in their medical history had elevated levels of C-reactive protein if they had been exposed to aircraft noise in the past and subsequently developed noise annoyance reactions (0.305 vs 1.5; P = .0094). In addition, left ventricular ejection fraction in these patients after a heart attack was worse than that in patients with infarction without noise exposure in their medical history (62.5 vs 65.6; P = .0053).

The results suggest that measures to reduce environmental noise could help improve the clinical outcomes of heart attack patients, according to the authors.

Mental Health Effects

Traffic noise also may be associated with an increased risk for depression and anxiety disorders, as reported 2 years ago by the German Society for Psychosomatic Medicine and Medical Psychotherapy. Evolution has programmed the human organism to perceive noises as indicators of potential sources of danger — even during sleep. “Noise puts the body on alert,” explained Manfred E. Beutel, MD, director of the Clinic for Psychosomatic Medicine and Psychotherapy at the University of Mainz. As a result, the autonomic nervous system activates stress hormones such as adrenaline and cortisol, leading to an increase in heart rate and blood pressure. If noise becomes chronic, chronic diseases can develop. “Indeed, observational and experimental studies have shown that persistent noise annoyance promotes incident hypertension, cardiovascular diseases, and type 2 diabetes,” said Dr. Beutel.

Depression Risk Doubled

As has become increasingly clear, the negative effects of noise annoyance also include mental illness. “Noise annoyance disrupts daily activities and interferes with feelings and thoughts, sleep, and recovery,” said Dr. Beutel. The interruptions trigger negative emotional reactions such as anger, distress, exhaustion, flight impulses, and stress symptoms. “Such conditions promote the development of depression over time,” said Dr. Beutel. This observation was confirmed by the large-scale Gutenberg Health Study using the example of the Mainz population, which suffers to a large extent from noise annoyance because of the nearby Frankfurt Airport. “With increasing noise annoyance, the rates of depression and anxiety disorders steadily increased, until the risks eventually doubled with extreme annoyance,” said Dr. Beutel. Other studies point in the same direction. For example, a meta-analysis found a 12% increase in the risk for depression per 10-dB increase in noise. Another study found an association between nocturnal noise annoyance and the use of antidepressants.
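
To see what such a per-decibel estimate implies, consider the following minimal sketch in Python. It assumes the log-linear exposure-response model that meta-analyses of this kind typically report, in which the 12% relative-risk increase compounds with each 10-dB step; the specific noise increments shown are illustrative, not taken from the study.

    # Illustrative sketch: how a meta-analytic relative risk (RR) of 1.12 per
    # 10 dB scales with larger noise increments, assuming a log-linear
    # exposure-response relationship (an assumption, not stated in the study).
    def relative_risk(delta_db, rr_per_10db=1.12):
        """RR for a delta_db increase in noise, given the RR per 10-dB step."""
        return rr_per_10db ** (delta_db / 10)

    for delta in (10, 20, 30):
        print(f"+{delta} dB -> RR = {relative_risk(delta):.2f}")
    # +10 dB -> RR = 1.12, +20 dB -> RR = 1.25, +30 dB -> RR = 1.40

Under that model, a 20-dB increase would correspond to roughly a 25% higher relative risk for depression.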

Fine Particulate Matter

According to an evaluation of the Gutenberg Study, people perceive noise annoyance from aircraft noise as the most pronounced, followed by road, neighborhood, industrial, and railway noise. Noise occurs most frequently in urban areas that also produce air pollution such as fine particulate matter. “Fine particulate matter is also suspected of promoting anxiety and depression,” said Dr. Beutel, “because the small particles of fine particulate matter can enter the bloodstream and trigger inflammatory processes there, which in turn are closely related to depression.”

This story was translated from Univadis Germany, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


Is Red Meat Healthy? Multiverse Analysis Has Lessons Beyond Meat

Article Type
Changed
Mon, 05/13/2024 - 15:13

Observational studies on red meat consumption and lifespan are prime examples of attempts to find signal in a sea of noise. 

Randomized controlled trials are the best way to sort cause from mere correlation. But these are not possible in most matters of food consumption. So, we look back and observe groups with different exposures.

My most frequent complaint about these nonrandom comparison studies has been the chance that the two groups differ in important ways, and it’s these differences — not the food in question — that account for the disparate outcomes.

But selection biases are only one issue. There is also the matter of analytic flexibility. Observational studies are born from large databases. Researchers have many choices in how to analyze all these data.

A few years ago, Brian Nosek, PhD, and colleagues elegantly showed that analytic choices can affect results. His Many Analysts, One Data Set study had little uptake in the medical community, perhaps because he studied a social science question.
 

Multiple Ways to Slice the Data

Recently, a group from McMaster University, led by Dena Zeraatkar, PhD, has confirmed the analytic choices problem, using the question of red meat consumption and mortality. 

Their idea was simple: Because there are many plausible and defensible ways to analyze a dataset, we should not choose one method; rather, we should choose thousands, combine the results, and see where the truth lies. 

You might wonder how there could be thousands of ways to analyze a dataset. I surely did. 

The answer stems from the choices that researchers face: the selection of eligible participants, the choice of analytic model (logistic, Poisson, etc.), and the covariates to adjust for. Because these choices combine multiplicatively, the number of possible analyses grows exponentially.
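
To make the combinatorics concrete, here is a toy calculation in Python; the choice names and counts below are invented for illustration and are not taken from the review.

    from math import prod

    # Hypothetical analytic choices (names and counts invented for illustration).
    # Independent choices multiply, so specifications grow exponentially.
    choices = {
        "eligibility criteria": 10,
        "outcome definition": 4,
        "regression model": 3,     # e.g., Cox, Poisson, logistic
        "exposure coding": 5,
    }
    n_covariates = 20              # each covariate is either adjusted for or not

    n_specs = prod(choices.values()) * 2 ** n_covariates
    print(f"{n_specs:,}")          # 629,145,600 unique analyses from this toy setup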

Dr. Zeraatkar and colleagues are research methodologists, so, sadly, they are comfortable with the clunky name of this approach: specification curve analysis. Don’t be deterred. It means that they analyze the data in thousands of ways using computers. Each way is a specification. In the end, the specifications give rise to a curve of hazard ratios for red meat and mortality. Another name for this approach is multiverse analysis.

For their paper in the Journal of Clinical Epidemiology, aptly named “Grilling the Data,” they didn’t just conjure up the many analytic ways to study the red meat–mortality question. Instead, they used a published systematic review of 15 studies on unprocessed red meat and early mortality. The studies included in this review reported 70 unique ways to analyze the association. 
 

Is Red Meat Good or Bad?

Their first finding was that this analysis yielded widely disparate effect estimates, from 0.63 (reduced risk for early death) to 2.31 (a higher risk). The median hazard ratio was 1.14 with an interquartile range (IQR) of 1.02-1.23. One might conclude from this that eating red meat is associated with a slightly higher risk for early mortality. 

Their second step was to calculate how many ways (specifications) there were to analyze the data by totaling all possible combinations of choices in the 70 ways found in the systematic review. 

They calculated a total of 10 quadrillion possible unique analyses (a quadrillion is a 1 followed by 15 zeros). No available computing power can run that many analyses. So, they generated 20 random unique combinations of covariates, which narrowed the field to about 1400 analyses. About 200 of these were excluded because of implausibly wide confidence intervals. 

Voilà. They now had about 1200 different ways to analyze a dataset; they chose an NHANES longitudinal cohort study from 2007-2014. They deemed each of the more than 1200 approaches plausible because they were derived from peer-reviewed papers written by experts in epidemiology. 
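
As a rough sketch of what such an analysis might look like in code, the loop below samples covariate subsets at random and fits one Cox model per specification. The column names, the random sampling scheme, and the use of the lifelines library are my assumptions for illustration; this is not the authors' actual pipeline.

    import random
    import pandas as pd
    from lifelines import CoxPHFitter  # assumed tooling; the paper's pipeline may differ

    def specification_curve(df, covariates, n_samples=1200, seed=0):
        """Fit one Cox model per sampled covariate subset; return the hazard
        ratios for red meat, sorted to form the specification curve."""
        rng = random.Random(seed)
        hrs = []
        for _ in range(n_samples):
            # Each specification adjusts for a different random covariate subset.
            subset = rng.sample(covariates, rng.randint(0, len(covariates)))
            cols = ["followup_years", "died", "red_meat_servings"] + subset
            cph = CoxPHFitter()
            cph.fit(df[cols], duration_col="followup_years", event_col="died")
            hrs.append(cph.hazard_ratios_["red_meat_servings"])
        return pd.Series(hrs).sort_values().reset_index(drop=True)

Plotting the sorted hazard ratios gives the specification curve; a curve that hugs 1.0, as it did here, is the visual signature of a null result.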

Specification Curve Analysis Results 

Each analysis (or specification) yielded a hazard ratio for red meat exposure and death.

  • The median HR for the effect of red meat on all-cause mortality was 0.94 (IQR, 0.83-1.05); that is, not significant.
  • The range of hazard ratios was large, from 0.51 (a 49% reduced risk for early mortality) to 1.75 (a 75% increased risk).
  • Among all analyses, 36% yielded hazard ratios above 1.0 and 64% below 1.0.
  • As for statistical significance, defined as P ≤ .05, only 4% (48 specifications) met this threshold. Dr. Zeraatkar reminded me that this is what you'd expect if unprocessed red meat has no effect on longevity (see the quick check after this list).
  • Of the 48 statistically significant analyses, 40 indicated that red meat consumption reduced early death and eight indicated that it increased mortality.
  • Nearly half the analyses yielded unexciting point estimates, with hazard ratios between 0.90 and 1.10.
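
That expectation is easy to check: with roughly 1200 specifications and a true null, about 5% should cross P ≤ .05 by chance alone. A back-of-the-envelope comparison, using the approximate counts reported above:

    # Rough null check: how many "significant" specifications would chance alone
    # produce? Counts are approximate, taken from the figures reported above.
    n_specs = 1200                        # plausible analyses retained
    alpha = 0.05
    expected_by_chance = alpha * n_specs  # about 60 under a true null
    observed = 48                         # significant specifications actually found
    print(expected_by_chance, observed)   # 60.0 vs 48: consistent with no effect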

Paradigm Changing 

As a user of evidence, I find this a potentially paradigm-changing study. Observational studies far outnumber randomized trials. For many medical questions, observational data are all we have. 

Now think about every observational study published. The authors tell you — post hoc — which method they used to analyze the data. The key point is that it is one method. 

Dr. Zeraatkar and colleagues have shown that there are thousands of plausible ways to analyze the data, and this can lead to very different findings. In the specific question of red meat and mortality, their many analyses yielded a null result. 

Now imagine other cases where the researchers did many analyses of a dataset and chose to publish only the significant ones. Observational studies are rarely preregistered, so a reader cannot know how a result would vary depending on analytic choices. A specification curve analysis of a dataset provides a much broader picture. In the case of red meat, you see some significant results, but the vast majority hover around null. 

What about the difficulty in analyzing a dataset 1000 different ways? Dr. Zeraatkar told me that it is harder than just choosing one method, but it’s not impossible. 

The main barrier to adopting this multiverse approach to data, she noted, was not the extra work but the entrenched belief among researchers that there is a best way to analyze data. 

I hope you read this paper and think about it every time you read an observational study that finds a positive or negative association between two things. Ask: What if the researchers were as careful as Dr. Zeraatkar and colleagues and did multiple different analyses? Would the finding hold up to a series of plausible analytic choices? 

Nutritional epidemiology would benefit greatly from this approach. But so would any observational study of an exposure and outcome. I suspect that the number of “positive” associations would diminish. And that would not be a bad thing.

 

Dr. Mandrola, a clinical electrophysiologist at Baptist Medical Associates, Louisville, Kentucky, disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


Video Games Marketing Food Impacts Teens’ Eating

Article Type
Changed
Mon, 05/13/2024 - 14:19

 

Food and drink advertisements on video game live-streaming platforms (VGLSPs) such as Twitch are associated with a greater preference for and consumption of products high in fat, salt, and/or sugar (HFSS) among teenagers, according to research presented on May 12, 2024, at the 31st European Congress on Obesity in Venice, Italy.

The presentation by Rebecca Evans, University of Liverpool, United Kingdom, included findings from three recently published studies and a submitted randomized controlled trial. At the time of the research, the top VGLSPs globally were Twitch (with 77% of the market share by hours watched), YouTube Gaming (15%), and Facebook Gaming Live (7%).

“Endorsement deals for prominent streamers on Twitch can be worth many millions of dollars, and younger people, who are attractive to advertisers, are moving away from television to these more interactive forms of entertainment,” Ms. Evans said. “These deals involve collaborating with brands and promoting their products, including foods that are high in fats, salt, and/or sugar.”

To delve more deeply into the extent and consequences of VGLSP advertising for HFSS products, the researchers first analyzed 52 hour-long Twitch videos uploaded by three popular influencers. Food cues appeared at an average rate of 2.6 per hour, and each cue lasted an average of 20 minutes, implying that food marketing was on screen for much of each hour.

Most cues were for branded products (70.7%), most of which were HFSS foods (80.5%), led by energy drinks (62.4%). Nearly all cues (97.7%) appeared without an advertising disclosure. Food cues most often took the form of product placement (44.0%) or looping banners (40.6%), with the remainder being features such as tie-ins, logos, or offers. Notably, these forms of advertising are always visible on the screen, so viewers cannot skip over them or close them.

Next, the team did a systematic review and meta-analysis to assess the relationship between exposure to digital game-based or influencer food marketing and food-related outcomes. They found that young people were twice as likely to prefer foods displayed via digital game-based marketing and that influencer and digital game-based marketing was associated with increased HFSS food consumption, amounting to about 37 additional calories in one sitting.

Researchers then surveyed 490 youngsters (mean age, 16.8 years; 70% female) to explore associations between recall of food marketing on the top VGLSPs and food-related outcomes. Recall was associated with more positive attitudes toward HFSS foods and, in turn, with the purchase and consumption of the marketed HFSS foods.

In addition, the researchers conducted a lab-based randomized controlled trial to explore associations between HFSS food marketing in a mock Twitch stream and subsequent snack intake. A total of 91 youngsters (average age, 18 years; 69% women) viewed the mock stream, which contained an advertisement (an image overlaid on the video featuring a brand logo and product) for either an HFSS food or a non-branded food. They were then offered a snack. Acute exposure to HFSS food marketing was not associated with immediate consumption, but more habitual use of VGLSPs was associated with increased intake of the marketed snack.

The observational studies could not prove cause and effect, and may not be generalizable to all teens, the authors acknowledged. They also noted that some of the findings are based on self-report surveys, which can lead to recall bias and may have affected the results.

Nevertheless, Ms. Evans said, “The high level of exposure to digital marketing of unhealthy food could drive excess calorie consumption and weight gain, particularly in adolescents who are more susceptible to advertising. It is important that digital food marketing restrictions encompass innovative and emerging digital media such as VGLSPs.”

The research formed Ms. Evans’ PhD work, which is funded by the University of Liverpool. Ms. Evans and colleagues declared no conflicts of interest.

A version of this article appeared on Medscape.com.


New and Emerging Treatments for Major Depressive Disorder

Article Type
Changed
Tue, 05/14/2024 - 09:41

Outside of treating major depressive disorder (MDD) through the monoamine system with selective serotonin reuptake inhibitors and serotonin-norepinephrine reuptake inhibitors, exploration of other treatment pathways has opened the possibility of faster onset of action and fewer side effects.

 

In this ReCAP, Dr. Joseph Goldberg, from Mount Sinai Hospital in New York, NY, outlines how a better understanding of the glutamate system has led to the emergence of ketamine and esketamine as important treatment options, as well as the combination therapy of dextromethorphan with bupropion.

 

Dr. Goldberg also discusses new results on serotonin system modulation through the 5HT1A receptor with gepirone and through the 5HT2A receptor with psilocybin, and reports on a new compound, esmethadone (REL-1017). Finally, he discusses the first approval of a digital therapeutic app designed to augment pharmacotherapy and the dopamine partial agonist cariprazine as an adjunctive therapy.

--

Joseph F. Goldberg, MD, Clinical Professor, Department of Psychiatry, Icahn School of Medicine at Mount Sinai; Teaching Attending, Department of Psychiatry, Mount Sinai Hospital, New York, NY

Joseph F. Goldberg, MD, has disclosed the following relevant financial relationships:

 

Serve(d) as a director, officer, partner, employee, advisor, consultant, or trustee for: AbbVie; Genomind; Luye Pharma; Neuroma; Neurelis; Otsuka; Sunovion

Serve(d) as a speaker or a member of a speakers bureau for: AbbVie; Alkermes; Axsome; Intracellular Therapies

Receive(d) royalties from: American Psychiatric Publishing; Cambridge University Press
