Rural Areas Face Steeper Decline in Hospital Obstetric Services Than Urban Centers

TOPLINE:

Between 2010 and 2022, hospital-based obstetric care declined significantly across the United States, with 52.4% of rural hospitals and 35.7% of urban hospitals not offering obstetric services by 2022. Rural hospitals experienced a steeper increase in the percentage of facilities without obstetrics than urban counterparts, despite several national maternity care access initiatives.

METHODOLOGY:

  • Researchers conducted a retrospective cohort study of 4964 United States short-term acute care hospitals, including 1982 in rural counties and 2982 in urban counties, analyzing data from 2010 to 2022.
  • Analysis utilized American Hospital Association annual surveys and Centers for Medicare & Medicaid Services Provider of Services files, applying an enhanced algorithm to identify hospital-based obstetric services availability.
  • Hospital rurality classification followed Office of Management and Budget definitions, with urban hospitals defined as those located in metropolitan statistical areas with > 250,000 inhabitants and rural hospitals as those in nonmetropolitan areas with < 50,000 inhabitants.
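
For illustration only, a minimal sketch of this kind of service-availability tabulation is shown below; it is not the study’s enhanced algorithm, and the table layout, column names, and values are hypothetical:

```python
# Minimal sketch (hypothetical data, not the study's enhanced algorithm):
# given one row per hospital-year with rurality and an obstetrics flag,
# summarize the share of hospitals without obstetric services by year.
import pandas as pd

hospital_years = pd.DataFrame({
    "year":       [2010, 2010, 2010, 2022, 2022, 2022],
    "rural":      [True, True, False, True, True, False],
    "obstetrics": [True, False, True, False, False, True],
})

share_without = (
    hospital_years
    .assign(no_obstetrics=lambda d: ~d["obstetrics"])
    .groupby(["year", "rural"])["no_obstetrics"]
    .mean()                      # proportion of hospitals without obstetrics
    .mul(100)                    # convert to percent
    .rename("pct_without_obstetrics")
    .reset_index()
)
print(share_without)
```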

TAKEAWAY:

  • A total of 537 hospitals lost obstetric services between 2010 and 2022, with 238 rural hospitals and 299 urban hospitals affected, while only 138 hospitals gained obstetric services during this period.
  • The percentage of hospitals without obstetrics increased steadily from 35.2% to 42.4% of all hospitals between 2010 and 2022, with rural hospitals consistently showing higher rates than urban facilities.
  • By 2022, more than half (52.4%) of rural hospitals and over one third (35.7%) of urban hospitals did not offer obstetric care, representing a significant decline in access to maternal healthcare services.
  • Urban areas showed greater potential for service recovery, with 112 urban hospitals gaining obstetric services compared with only 26 rural hospitals during the study period.
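
As a quick arithmetic check, the counts in the bullets above are internally consistent:

```python
# Back-of-the-envelope check using only the counts reported above.
losses = {"rural": 238, "urban": 299}      # hospitals that lost obstetric services, 2010-2022
gains  = {"rural": 26,  "urban": 112}      # hospitals that gained obstetric services

total_losses = sum(losses.values())        # 537
total_gains  = sum(gains.values())         # 138
net_change   = total_gains - total_losses  # -399 hospitals offering obstetrics, net

print(total_losses, total_gains, net_change)
```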

IN PRACTICE:

“Access to obstetric care is an important determinant of maternal and infant health outcomes, and amidst a maternal health crisis in the US, hospital-based obstetric care has declined in both rural and urban communities,” wrote the authors of the study.

SOURCE:

The study was led by Katy B. Kozhimannil, PhD, MPA, Division of Health Policy and Management, University of Minnesota School of Public Health in Minneapolis. It was published online on December 4 in JAMA.

LIMITATIONS: 

The study was limited by the lack of data on births outside hospital settings, which represent less than 2% of United States births. Additionally, the denominator for the study outcome declined each year because of hospital closures, particularly affecting rural hospitals. The researchers also noted that while rurality exists on a continuum, they applied a dichotomous county-based measure of hospital location. Furthermore, the hospital-level data did not contain patient-level information, making it impossible to analyze how changes in obstetric status affected patient outcomes.

DISCLOSURES:

This study was supported by the Federal Office of Rural Health Policy, Health Resources and Services Administration, Department of Health & Human Services under a Public Health Service Cooperative Agreement. One coauthor disclosed receiving grants from the Laura and John Arnold Foundation, Ballad Health, and the Commonwealth Fund outside the submitted work. A coauthor reported receiving personal fees from the American Institute of Biological Sciences on behalf of March of Dimes as a grant reviewer. Another coauthor reported receiving grants from the Eunice Kennedy Shriver National Institute of Child Health and Human Development outside the submitted work. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.

A version of this article first appeared on Medscape.com.

Smarter Pregnancy App Links Improved Lifestyle Habits to Lower Maternal Blood Pressure in Early Pregnancy

TOPLINE:

Digital lifestyle coaching through the Smarter Pregnancy program was associated with a reduction in maternal blood pressure (BP) of approximately 2 mm Hg during the first trimester of pregnancy. The program promotes healthier lifestyle behaviors through personalized coaching on vegetable and fruit intake, smoking cessation, and alcohol abstinence.

METHODOLOGY:

  • Researchers analyzed data from the Rotterdam Periconception Cohort between 2010 and 2019, including 132 pregnant women who used Smarter Pregnancy for 6-24 weeks in the intervention group and 1091 pregnant women in the control group.
  • Outcomes included changes in systolic, diastolic, and mean arterial BPs between baseline and first-trimester measurements, with a median gestational age of 7 weeks at inclusion.
  • Analysis tracked lifestyle behaviors in the intervention group at 12 and 24 weeks using risk scores for vegetables, fruits, smoking, and alcohol consumption.
  • Multivariable analysis adjusted for baseline BP measurements, age, gestational age, geographic origin, parity, and conception mode to evaluate program effectiveness.
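
A minimal sketch of an adjusted linear model along these lines is shown below; the data and column names are hypothetical and the covariate list is abbreviated, so this is not the authors’ exact specification:

```python
# Minimal sketch (hypothetical data and columns, abbreviated covariates):
# an adjusted linear model of first-trimester systolic BP change on program use.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "sbp_change":   [-3.0, 1.0, -2.5, 0.5, -4.0, 2.0, -1.5, 0.0],
    "intervention": [1, 0, 1, 0, 1, 0, 1, 0],       # used Smarter Pregnancy
    "baseline_sbp": [118, 122, 115, 130, 121, 119, 125, 117],
    "age":          [31, 29, 34, 27, 33, 30, 28, 32],
    "gest_age_wk":  [7, 8, 6, 7, 9, 7, 8, 6],
    "parity":       [0, 1, 0, 2, 1, 0, 1, 1],
})

model = smf.ols(
    "sbp_change ~ intervention + baseline_sbp + age + gest_age_wk + parity",
    data=df,
).fit()
print(model.params["intervention"])   # adjusted beta for program use (mm Hg)
```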

TAKEAWAY:

  • The intervention group demonstrated significant reductions in systolic (beta, −2.34 mm Hg; 95% CI, −4.67 to −0.01; P = .049), diastolic (beta, −2.00 mm Hg; 95% CI, −3.57 to −0.45; P = .012), and mean arterial BP (beta, −2.22 mm Hg; 95% CI, −3.81 to −0.52; P = .011) compared with controls.
  • Among women who underwent assisted reproductive technology (ART), significant reductions were observed in diastolic (beta, −2.38 mm Hg; 95% CI, −4.20 to −0.56) and mean arterial BP (beta, −2.63 mm Hg; 95% CI, −4.61 to −0.56).
  • Program usage was associated with decreased lifestyle risk scores at 12 weeks (beta, −0.84; 95% CI, −1.19 to −0.49) and 24 weeks (beta, −1.07; 95% CI, −1.44 to −0.69), indicating improved lifestyle behaviors.
  • Lifestyle risk scores significantly decreased in both ART and natural pregnancy subgroups after program completion.
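
As a rough consistency check on the systolic estimate above, the approximate standard error and two-sided P value can be recovered from the reported 95% CI under a normal approximation:

```python
# Recover an approximate SE and P value from the reported beta and 95% CI
# (normal approximation; consistent with the published P = .049 for systolic BP).
from scipy.stats import norm

beta, lo, hi = -2.34, -4.67, -0.01      # systolic BP estimate and 95% CI (mm Hg)
se = (hi - lo) / (2 * 1.96)             # ~1.19 mm Hg
z = beta / se
p = 2 * norm.sf(abs(z))
print(round(se, 2), round(p, 3))        # 1.19, 0.049
```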

IN PRACTICE:

“The findings suggest that Smarter Pregnancy can be used to coach women on healthy lifestyle behaviors commencing from the preconception period onwards to improve BP outcomes. Of note, although implementing the program during [the] first trimester seems easier, initiating lifestyle coaching as early as preconceptional period can act as preventive measure against adverse health outcomes,” wrote the authors of the study.

SOURCE:

The study was led by Batoul Hojeij, PhD, Erasmus University Medical Center in Rotterdam, the Netherlands. It was published online in the American Journal of Preventive Medicine.

LIMITATIONS:

According to the authors, participants in the intervention group might have had healthier lifestyles due to their motivation to use a digital coaching program. The sample size of naturally conceived pregnancies in the intervention group was small (n = 41), reducing statistical power for subgroup analysis. The high percentage of missing data for baseline BP measurements (64%) could have affected statistical power and led to potential bias, though multiple imputations were used to address this limitation.

DISCLOSURES:

This study was supported by the European Union’s Horizon 2020 research and innovation program (DohART-NET) and the Department of Obstetrics and Gynaecology of the Erasmus MC. Kevin D. Sinclair, PhD, DSc, received funding from the Biotechnology and Biological Sciences Research Council.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

As-Needed Blood Pressure Medication Linked to Higher Risk for Acute Kidney Injury

TOPLINE:

Veterans receiving blood pressure (BP) medication as needed while hospitalized were at a 23% higher risk for acute kidney injury (AKI) and a 1.5-fold greater risk for potentially dangerous rapid reductions in BP.

METHODOLOGY:

  • Researchers analyzed the records of 133,760 veterans (90% men; mean age, 71.2 years) hospitalized in Veterans Affairs hospitals between 2015 and 2020.
  • The study analyzed as-needed administration of BP drugs to patients who had an elevated BP but were asymptomatic.
  • Patients who had at least one systolic BP reading above 140 mm Hg and received scheduled BP medication in the first 24 hours of hospitalization were included; those admitted to intensive care units or those who required surgery were excluded.
  • The analysis compared outcomes between 28,526 patients who received as-needed drugs and 105,234 who did not; the primary outcome was time to the first AKI occurrence while hospitalized.
  • Secondary outcomes included a reduction of more than 25% in systolic BP within 3 hours of as-needed BP medication, as well as a composite outcome of myocardial infarction, stroke, or death during hospitalization.
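
A minimal sketch of how the rapid-reduction outcome could be flagged is shown below; the data layout is hypothetical and does not reflect the study’s actual extraction logic:

```python
# Minimal sketch (hypothetical data layout): flag whether systolic BP fell by
# more than 25% within 3 hours of an as-needed dose, one of the secondary
# outcomes described above.
from datetime import datetime, timedelta

def rapid_bp_drop(dose_time, dose_sbp, later_readings, window_hours=3, threshold=0.25):
    """later_readings: list of (timestamp, systolic_bp) tuples after the dose."""
    cutoff = dose_time + timedelta(hours=window_hours)
    for ts, sbp in later_readings:
        if ts <= cutoff and (dose_sbp - sbp) / dose_sbp > threshold:
            return True
    return False

dose_time = datetime(2020, 1, 1, 10, 0)
readings = [(datetime(2020, 1, 1, 11, 30), 118), (datetime(2020, 1, 1, 14, 0), 150)]
print(rapid_bp_drop(dose_time, dose_sbp=165, later_readings=readings))  # True: 165 -> 118 exceeds 25%
```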

TAKEAWAY:

  • Researchers found that an AKI was 23% more likely to occur in veterans who received at least one as-needed BP medication (hazard ratio [HR], 1.23; 95% CI, 1.18-1.29).
  • Veterans who received BP medication as needed were 50% more likely to experience a rapid drop in BP within 3 hours (HR, 1.50; 95% CI, 1.39-1.62) and more than twice as likely after 1 hour (HR, 2.11; 95% CI, 1.81-2.46) than those who did not receive medication.
  • The risk of experiencing the composite outcome was 69% higher in the as-needed group (rate ratio [RR], 1.69; 95% CI, 1.49-1.92), with individual increased risks for myocardial infarction (RR, 2.92; 95% CI, 2.09-4.07), stroke (RR, 1.99; 95% CI, 1.30-3.03), and death (RR, 1.52; 95% CI, 1.32-1.75).
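
For reference, the “% higher” phrasing in these bullets is simply the ratio estimate minus 1, expressed as a percentage:

```python
# Map the reported ratio estimates onto the "% higher" wording used above.
def pct_higher(ratio):
    return (ratio - 1) * 100

for label, ratio in [("AKI HR", 1.23), ("BP drop within 3 h HR", 1.50),
                     ("BP drop within 1 h HR", 2.11), ("composite outcome RR", 1.69)]:
    print(f"{label}: {pct_higher(ratio):.0f}% higher than without as-needed dosing")
```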

IN PRACTICE:

“The practical implication of our findings is that there is at least equipoise regarding the utility of as-needed BP medication use for asymptomatic BP elevations in hospitals ... future prospective trials should evaluate the risks and benefits of this common practice,” the study authors wrote.

SOURCE:

The study was led by Muna Thalji Canales, MD, MS, of the North Florida/South Georgia Veterans Health System in Gainesville, Florida. It was published online on November 25 in JAMA Internal Medicine.

LIMITATIONS:

The analysis may have included confounding factors that could have influenced results. The focus on veterans who had not undergone surgery limits generalizability to women, surgical patients, and nonveteran populations. The researchers noted limited data on factors that might influence BP readings in the hospital such as pain, stress, and faulty machinery.

DISCLOSURES:

Study authors reported grants and consulting fees from Merck Sharp & Dohme, BMS, and Teva Pharmaceuticals, among others.

 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Iron Overload: The Silent Bone Breaker

TOPLINE:

Patients with serum ferritin levels higher than 1000 μg/L show a 91% increased risk for any fracture, with a doubled risk for vertebral and humerus fractures compared with those without iron overload.

 

METHODOLOGY:

  • Iron overload’s association with decreased bone mineral density is established, but its relationship to osteoporotic fracture risk has remained understudied and inconsistent across fracture sites.
  • Researchers conducted a population-based cohort study using a UK general practice database to evaluate the fracture risk in 20,264 patients with iron overload and 192,956 matched controls without elevated ferritin (mean age, 57 years; about 40% women).
  • Patients with iron overload were identified as those with laboratory-confirmed iron overload (serum ferritin levels > 1000 μg/L; n = 13,510) or a diagnosis of an iron overloading disorder, such as thalassemia major, sickle cell disease, or hemochromatosis (n = 6754).
  • The primary outcome of interest was the first occurrence of an osteoporotic fracture after the diagnosis of iron overload or first record of high ferritin.
  • A sensitivity analysis compared the risk for osteoporotic fracture in patients with laboratory-confirmed iron overload vs those with only a diagnosis code and no documented ferritin elevation.
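
A minimal sketch of the exposure definition described above follows; the record layout is hypothetical:

```python
# Minimal sketch (hypothetical record layout) of the exposure groups above:
# laboratory-confirmed iron overload (ferritin > 1000 ug/L) vs a diagnosis code
# for an iron overloading disorder without a confirmed elevation.
def classify_exposure(max_ferritin_ug_l, has_overload_diagnosis):
    if max_ferritin_ug_l is not None and max_ferritin_ug_l > 1000:
        return "laboratory-confirmed iron overload"
    if has_overload_diagnosis:
        return "diagnosis code only (no confirmed elevation)"
    return "unexposed"

print(classify_exposure(1350, False))   # laboratory-confirmed iron overload
print(classify_exposure(None, True))    # diagnosis code only (no confirmed elevation)
```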

TAKEAWAY:

  • In the overall cohort, patients with iron overload had a 55% higher risk for any osteoporotic fracture than control individuals (adjusted hazard ratio [aHR], 1.55; 95% CI, 1.42-1.68), with the highest risk observed for vertebral fractures (aHR, 1.97; 95% CI, 1.63-2.37) and humerus fractures (aHR, 1.91; 95% CI, 1.61-2.26).
  • Patients with laboratory-confirmed iron overload showed a 91% increased risk for any fracture (aHR, 1.91; 95% CI, 1.73-2.10), with a 2.5-fold higher risk observed for vertebral fractures (aHR, 2.51; 95% CI, 2.01-3.12), followed by humerus fractures (aHR, 2.41; 95% CI, 1.96-2.95).
  • There was no increased risk for fracture at any site in patients with a diagnosis of an iron overloading disorder but no laboratory-confirmed iron overload.
  • No sex-specific differences were identified in the association between iron overload and fracture risk.

IN PRACTICE:

“The main clinical message from our findings is that clinicians should consider iron overloading as a risk factor for fracture. Importantly, among high-risk patients presenting with serum ferritin values exceeding 1000 μg/L, osteoporosis screening and treatment strategies should be initiated in accordance with the guidelines for patients with hepatic disease,” the authors wrote.

 

SOURCE:

The study was led by Andrea Michelle Burden, PhD, Institute of Pharmaceutical Sciences, Department of Chemistry and Applied Biosciences, ETH Zürich in Switzerland, and was published online in The Journal of Clinical Endocrinology & Metabolism.

 

LIMITATIONS:

The study could not assess how the duration of iron overload affected fracture risk; thus, patients could enter the cohort with a single elevated serum ferritin value that may not have reflected systemic iron overload. The authors also acknowledged potential exposure misclassification among matched control individuals because only 2.9% had a serum ferritin value available at baseline. Also, researchers were unable to adjust for inflammation status due to the limited availability of C-reactive protein measurements and the lack of leukocyte count data in primary care settings.

 

DISCLOSURES:

This study received support through grants from the German Research Foundation. The authors declared no conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

PFAS Exposure Can Impair Thyroid Homeostasis in Adults

TOPLINE:

Exposure to individual or mixed per- and polyfluoroalkyl substances (PFASs) is associated with changes in peripheral rather than central thyroid hormone sensitivity.

METHODOLOGY:

  • PFASs are widely recognized for their persistence in the environment and potential endocrine-disrupting effects.
  • A cross-sectional study investigated associations between PFAS exposures and thyroid homeostasis parameters in adult participants in two National Health and Nutrition Examination Survey cycles (2007-2008 and 2011-2012).
  • Participants were required to have complete thyroid hormone profiles and measurements of PFAS concentration, not be pregnant, and not have thyroid disease or a history of using thyroid drugs such as thyroxine, methimazole, and propylthiouracil.
  • Levels of six PFASs were measured in serum: perfluorooctanoic acid (PFOA), perfluorooctanesulfonic acid (PFOS), perfluorononanoic acid (PFNA), perfluorodecanoic acid, perfluorohexane sulfonic acid (PFHxS), and 2-(N-methyl-perfluorooctane sulfonamido) acetic acid.
  • Thyroid homeostasis parameters were assessed using serum concentrations of thyroid hormones.
  • Peripheral sensitivity was calculated using the ratio of free triiodothyronine to free thyroxine (FT3/FT4) and the sum activity of peripheral deiodinases (SPINA-GD).
  • Central sensitivity was assessed with thyrotroph thyroxine resistance index, thyroid-stimulating hormone index, thyroid feedback quantile–based index (TFQI), and parametric TFQI.
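
For orientation, a minimal sketch of two of these indices appears below; it uses empirical quantiles as a stand-in for the published reference distributions, the values are hypothetical, and the formulas are simplified rather than the exact published ones:

```python
# Minimal sketch of two thyroid sensitivity markers named above (hypothetical
# values; empirical quantiles stand in for published reference distributions).
import numpy as np

ft4 = np.array([1.0, 1.2, 0.9, 1.1, 1.3])   # ng/dL
ft3 = np.array([3.1, 3.4, 2.9, 3.2, 3.5])   # pg/mL
tsh = np.array([1.8, 2.2, 1.1, 2.9, 0.9])   # mIU/L

ft3_ft4_ratio = ft3 / ft4                   # peripheral sensitivity marker

def empirical_cdf(values):
    ranks = values.argsort().argsort()      # 0-based rank of each value
    return (ranks + 1) / (len(values) + 1)

# TFQI-style index: quantile of FT4 minus (1 - quantile of TSH); higher values
# are interpreted as reduced central sensitivity to thyroid hormone.
tfqi_like = empirical_cdf(ft4) - (1 - empirical_cdf(tsh))
print(ft3_ft4_ratio, tfqi_like)
```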

TAKEAWAY:

  • Researchers included 2386 adults (mean age, 47.59 years; 53.94% men; 42.88% White).
  • FT3/FT4 and SPINA-GD were positively associated with PFOA, PFOS, PFNA, and PFHxS (P < .05 for all) in an adjusted analysis; however, no link was found between central thyroid sensitivity parameters and PFAS exposures.
  • Specifically, higher quartiles of PFOA and PFOS concentrations were associated with increased FT3/FT4 and SPINA-GD, indicating more efficient conversion of FT4 to FT3, that is, greater peripheral deiodinase activity.
  • Exposure to a mixture of different PFASs was also positively correlated with FT3/FT4 (beta, 0.013; P < .001) and SPINA-GD (beta, 1.230; P < .001), with PFOA showing the highest contribution.
  • Men and smokers showed higher correlations of PFOA with peripheral thyroid hormone sensitivity indicators than women and nonsmokers, respectively.

IN PRACTICE:

“PFAS exposure, especially PFOA and PFOS, mainly impacted peripheral sensitivity to thyroid hormones, instead of central sensitivity,” the authors wrote, adding that their results may support “taking more steps to prevent and reduce” the harmful effects of PFASs.

SOURCE:

This study was led by Xinwen Yu and Yufei Liu, Department of Endocrinology, The Second Affiliated Hospital of Air Force Medical University, Xi’an, China. It was published online in The Journal of Clinical Endocrinology & Metabolism.

LIMITATIONS:

The cross-sectional design of this study limited the ability to establish causal relationships between PFAS exposure and thyroid function. The assessment of thyroid homeostasis parameters was conducted indirectly by measuring thyroid hormone levels. Inaccuracies in self-reported data on long-term exposure to PFASs and the exclusion of other endocrine-disrupting chemicals may have affected the study’s conclusions.

DISCLOSURES:

This study was supported by grants from the Natural Science Foundation of Shaanxi Province, China; the Key Research and Development Project of Shaanxi Province; and the Clinical Research Program of Air Force Medical University. The authors reported having no relevant conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

TOPLINE:

Exposure to individual or mixed per- and polyfluoroalkyl substances (PFASs) is associated with changes in peripheral rather than central thyroid hormone sensitivity.

METHODOLOGY:

  • PFASs are widely recognized for their persistence in the environment and potential endocrine-disrupting effects.
  • A cross-sectional study investigated associations between PFAS exposures and thyroid homeostasis parameters in adult participants in two National Health and Nutrition Examination Survey cycles (2007-2008 and 2011-2012).
  • Participants were required to have complete thyroid hormone profiles and measurements of PFAS concentration, not be pregnant, and not have thyroid disease or a history of using thyroid drugs such as thyroxine, methimazole, and propylthiouracil.
  • Levels of six PFASs were measured in the serum: Perfluorooctanoic acid (PFOA), perfluorooctanesulfonic acid (PFOS), perfluorononanoic acid (PFNA), perfluorodecanoic acid, perfluorohexane sulfonic acid (PFHxS), and 2-(N-methyl-perfluorooctane sulfonamido) acetic acid.
  • Thyroid homeostasis parameters were assessed using serum concentrations of thyroid hormones.
  • Peripheral sensitivity was calculated using the ratio of free triiodothyronine to free thyroxine (FT3/FT4) and the sum activity of peripheral deiodinases (SPINA-GD).
  • Central sensitivity was assessed with thyrotroph thyroxine resistance index, thyroid-stimulating hormone index, thyroid feedback quantile–based index (TFQI), and parametric TFQI.

TAKEAWAY:

  • Researchers included 2386 adults (mean age, 47.59 years; 53.94% men; 42.88% White).
  • FT3/FT4 and SPINA-GD were positively associated with PFOA, PFOS, PFNA, and PFHxS (P < .05 for all) in an adjusted analysis; however, no link was found between central thyroid sensitivity parameters and PFAS exposures.
  • Specifically, higher quartiles of PFOA and PFOS concentrations were associated with an increased FT3/FT4 and SPINA-GD, indicating an increased conversion efficiency of FT4 to FT3 or peripheral deiodinase.
  • Exposure to a mixture of different PFASs was also positively correlated with FT3/FT4 (beta, 0.013; P < .001) and SPINA-GD (beta, 1.230; P < .001), with PFOA showing the highest contribution.
  • Men and smokers showed higher correlations of PFOA with peripheral thyroid hormone sensitivity indicators than women and nonsmokers, respectively.

IN PRACTICE:

“PFAS exposure, especially PFOA and PFOS, mainly impacted peripheral sensitivity to thyroid hormones, instead of central sensitivity,” the authors wrote, adding that their results may support, “taking more steps to prevent and reduce” the harmful effects of PFASs.

SOURCE:

This study was led by Xinwen Yu and Yufei Liu, Department of Endocrinology, The Second Affiliated Hospital of Air Force Medical University, Xi’an, China. It was published online in The Journal of Clinical Endocrinology & Metabolism.

LIMITATIONS:

The cross-sectional design of this study limited the ability to establish causal relationships between PFAS exposure and thyroid function. The assessment of thyroid homeostasis parameters was conducted indirectly by measuring thyroid hormone levels. Inaccuracies in self-reported data on long-term exposure to PFASs and the exclusion of other endocrine-disrupting chemicals may have affected the study’s conclusions.

DISCLOSURES:

This study was supported by grants from the Natural Science Foundation of Shaanxi Province, China; the Key Research and Development Project of Shaanxi Province; and the Clinical Research Program of Air Force Medical University. The authors reported having no relevant conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Flu Vaccine Guards Household Contacts of Infected People

Article Type
Changed

TOPLINE:

About one in five people who live in the same household as an individual infected with the influenza virus develop secondary infections within a 7-day follow-up period, with children facing the highest risk. Vaccination lowers the risk of contracting the infection among household contacts.

METHODOLOGY:

  • Researchers conducted a prospective cohort study using data collected between 2017 and 2020 to estimate the effectiveness of influenza vaccines in preventing secondary infections among household contacts.
  • Overall, 699 people were primary cases, that is, the first in a household to become infected (median age, 13 years; 54.5% female); there were 1581 household contacts (median age, 31 years; 52.7% female), and both groups were followed for 7 days.
  • Participants completed daily symptom diaries and collected nasal swabs during the follow-up period.
  • Participants also reported their influenza vaccination history; 50.1% of household contacts had received a vaccine at least 14 days before disease onset in the household's primary case.
  • The risk for secondary infection and the effectiveness of vaccination in preventing infection among household contacts were estimated overall and by virus type, subtype, and lineage.

TAKEAWAY:

  • Nearly half (48.2%) of primary cases were children and teens aged 5-17 years.
  • Overall, 22% of household contacts had laboratory-confirmed influenza during follow-up, of whom 7% were asymptomatic.
  • The overall risk for secondary infection among unvaccinated household contacts was 18.8%, with the highest risk observed among children younger than age 5 years (29.9%).
  • The overall effectiveness of influenza vaccines in preventing laboratory-confirmed infections among household contacts was 21% (95% CI, 1.4%-36.7%); a brief sketch of the underlying calculation follows this list.
  • The vaccine demonstrated specific protection against influenza B infection (56.4%; 95% CI, 30.1%-72.8%), particularly among those between ages 5 and 17 years.
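For intuition, the short Python sketch below shows the standard unadjusted vaccine-effectiveness calculation, VE = (1 - risk ratio) x 100, applied to counts back-calculated from the percentages reported above (roughly 789 unvaccinated and 792 vaccinated contacts). These counts are illustrative assumptions, and the study's published estimate comes from adjusted models, so this is a rough reconstruction rather than the authors' analysis.

```python
# Unadjusted vaccine effectiveness (VE) from secondary attack rates.
# The counts below are illustrative approximations reconstructed from the
# reported percentages, not the study's actual data; the published VE of 21%
# comes from adjusted models.

def vaccine_effectiveness(cases_vacc, n_vacc, cases_unvacc, n_unvacc):
    """VE (%) = (1 - risk in vaccinated / risk in unvaccinated) * 100."""
    risk_vacc = cases_vacc / n_vacc
    risk_unvacc = cases_unvacc / n_unvacc
    return (1 - risk_vacc / risk_unvacc) * 100

# ~14.9% of ~792 vaccinated contacts vs ~18.8% of ~789 unvaccinated contacts
ve = vaccine_effectiveness(cases_vacc=118, n_vacc=792,
                           cases_unvacc=148, n_unvacc=789)
print(f"Estimated VE: {ve:.1f}%")  # ~20.6%, close to the reported 21%
```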

IN PRACTICE:

“Although complementary preventive strategies to prevent influenza in household settings may be considered, seasonal influenza vaccination is the primary strategy recommended for prevention of influenza illness and its complications,” the authors wrote.

SOURCE:

The study was led by Carlos G. Grijalva, MD, MPH, of Vanderbilt University Medical Center in Nashville, Tennessee, and was published online in JAMA Network Open.

LIMITATIONS:

The recruitment of infected individuals from clinical testing pools may have limited the generalizability of the risk for secondary infection to households in which the primary case had a milder or asymptomatic infection. The study was unable to assess the effectiveness of specific vaccine formulations, such as high-dose vaccines. The stratification of estimates by influenza subtypes and lineages was challenging because of small cell sizes.

DISCLOSURES:

This study was supported by grants from the Centers for Disease Control and Prevention (CDC), and the authors reported support from grants from the National Institute of Allergy and Infectious Diseases. Some authors reported contracts with, and personal fees and grants from, the CDC and pharmaceutical companies such as Merck and Sanofi.

This article was created using several editorial tools, including artificial intelligence, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Watch That Attitude: Is There Ageism in Healthcare?

Article Type
Changed

People are living longer in Europe. Life expectancy increased on the continent by around 12 years between 1960 and 2022. And despite slower progress during the COVID-19 pandemic, the trend appears to be continuing.

Not only are Europeans living longer, but fertility rates are also declining. As a result, the number of people aged 75-84 years in Europe is projected to grow by 56.1% by 2050, while the population younger than 55 years is expected to fall by 13.5%.

This means that attitudes toward age need to change, and fast — even among healthcare professionals.

 

Healthcare Is Not Exempt From Ageist Attitudes

A systematic review published in the journal PLOS ONE in 2020 found that age was a determining factor in who received certain medical procedures or treatments. For example, a study of 9105 hospitalized patients found that healthcare providers were significantly more likely to withhold life-sustaining treatments from older patients. Another study found evidence that older people are excluded from clinical trials, even when the trials are for diseases that appear later in life, such as Parkinson's disease.

“In healthcare, there are different levels of ageism,” explained Hannah Swift, PhD, reader in social and organizational psychology at the University of Kent in the United Kingdom. 

Ageism is embedded in the laws, rules, and practices of institutions, she explained. This became especially obvious during the pandemic, when health professionals had to decide who to treat, possibly using age as a proxy for making some of these decisions, she said. 

“When you categorize people, you might be using stereotypes, assumptions, and expectations about age and that age group to make those decisions, and that’s where errors can occur.”

She added that ageist attitudes also become apparent at the interpersonal level, through the use of patronizing language or the offer of unnecessary help to older people based on assumptions about their cognitive and physical abilities.

“Older age is often wrongly associated with declining levels of health and activity,” said Ittay Mannheim, PhD, guest postdoctoral researcher on aging and ageism at the Open University of the Netherlands. “However, older adults are a very diverse group, varying widely in many aspects, including health conditions. This stereotype can influence how healthcare professionals interact with them, assuming frailty or memory issues simply based on age. It’s important to recognize that being older doesn’t necessarily mean being ill.” 

Mannheim’s research found that healthcare professionals often stand in the way of older people using technology-based treatments due to negative attitudes towards age. “So, actually, a barrier to using these technologies could be that healthcare professionals don’t think that someone can use it or won’t even offer it because someone looks old or is old,” he said.

 

The Impacts

Discrimination harms the physical, mental, and social well-being of those who experience it, and discrimination based on age is no exception.

The PLOS ONE review of research on the global reach of ageism found that experienced or self-determined ageism was associated with significantly worse health outcomes across all countries examined. The same research team calculated that an estimated 6.3 million cases of depression worldwide are linked to ageism.

Other research has found that exposure to negative age stereotyping reduces willingness to adopt a healthy lifestyle and increases the risk for cardiovascular events.

 

What Can Be Done?

“Healthcare professionals frequently interact with older adults at their most vulnerable, which can reinforce negative stereotypes of older people being vulnerable or ill,” said Swift. “However, not all older adults fit these stereotypes. Many can live well and independently. Perhaps healthcare education should include reminders of the diverse experiences of older individuals rather than solely focusing on the moments when they require help.”

Research indicates that although healthcare education institutions have made progress in geriatric training and the care of older individuals, improved education and training are still needed at all levels of geriatric healthcare, including among hospital administrators, physicians, nurses, personal caregivers, and allied health professionals.

“Generally speaking, what healthcare professionals learn about aging tends to focus more on the biological aspects,” said Mannheim. “However, they may not fully understand what it means to be old or how to interact with older individuals, especially regarding technology. It is important to raise awareness about ageism because, in my experience working with healthcare professionals, even a single workshop on ageism can have a profound impact. Participants often respond with surprise, saying something like, ‘Wow, I never thought about this before.’”

Mannheim said that training healthcare providers to understand the aging process better could help to reduce any biases they might have and better prepare them to respond more adequately to the needs of older patients.

“We cannot devalue the lives of older people simply because they are older. It is crucial for all of us, especially governments, to acknowledge our responsibility to protect and promote human rights for individuals of all ages. If we fail to do this, the strategies we’ve witnessed during this pandemic will be repeated in the future,” said Nena Georgantzi, PhD, Barcelona-based human rights manager at AGE Platform Europe, an EU network of organizations of and for older people.

 

A version of this article appeared on Medscape.com.

Is 1-Week Radiotherapy Safe for Breast Cancer?

Article Type
Changed

TOPLINE:

At 1 year, 82% of patients with breast cancer who received a 1-week ultrahypofractionated breast radiotherapy regimen reported no or mild toxicities. Most patients also reported that the reduced treatment time was a major benefit of the 1-week radiotherapy schedule.

METHODOLOGY:

  • In March 2020, during the COVID-19 pandemic, international and national guidelines recommended adopting a 1-week ultrahypofractionated radiotherapy schedule for patients with node-negative breast cancer. A phase 3 trial subsequently demonstrated that a 1-week regimen of 26 Gy in five fractions led to breast cancer outcomes similar to those of a standard moderately hypofractionated regimen (a brief dose-equivalence sketch follows this list).
  • In this study, researchers assessed real-world toxicities following ultrahypofractionated radiotherapy in 135 consecutive patients who received 1-week adjuvant radiotherapy of 26 Gy in five fractions from March to August 2020 at three centers in Ireland; 33 patients (25%) received a sequential boost.
  • Researchers recorded patient-reported outcomes on breast pain, swelling, firmness, and hypersensitivity at baseline and at 3, 6, and 12 months. Virtual consultations without video took place at baseline and at 3 and 6 months, and video consultations were offered at 1 year for a physician-led breast evaluation.
  • Researchers assessed patient perspectives on this new schedule and telehealth workflows using questionnaires.
  • Overall, 90% of patients completed the 1-year assessment and at least one other assessment. The primary endpoint was the worst toxicity reported at each time point.
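For context on the dose comparison in the first bullet, the sketch below applies the standard linear-quadratic EQD2 formula to the 1-week schedule and to a typical moderately hypofractionated schedule of 40 Gy in 15 fractions. The alpha/beta value is an assumption chosen for illustration and is not a parameter reported in the study.

```python
# Linear-quadratic equieffective dose in 2-Gy fractions (EQD2).
# The alpha/beta value of 3.5 Gy is an assumption for illustration; breast
# radiotherapy trials commonly quote values of roughly 3-4 Gy.

def eqd2(total_dose_gy: float, n_fractions: int, alpha_beta_gy: float = 3.5) -> float:
    """EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), where d is dose per fraction."""
    d = total_dose_gy / n_fractions
    return total_dose_gy * (d + alpha_beta_gy) / (2 + alpha_beta_gy)

for label, dose, fx in [("26 Gy in 5 fractions (1 week)", 26, 5),
                        ("40 Gy in 15 fractions (3 weeks)", 40, 15)]:
    print(f"{label}: EQD2 ~ {eqd2(dose, fx):.1f} Gy")
# 26 Gy / 5 fx  -> ~41.1 Gy
# 40 Gy / 15 fx -> ~44.8 Gy
```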

TAKEAWAY:

  • Overall, 76% of patients reported no or mild toxicities at 3 and 6 months, and 82% reported no or mild toxicities at 12 months.
  • At 1 year, 20 patients (17%) reported moderate toxicity, most commonly breast pain, and only two patients (2%) reported marked toxicities, including breast firmness and skin changes.
  • Researchers found no difference in toxicities between patients who received only 26 Gy in five fractions and those who received an additional sequential boost.
  • Most patients reported reduced treatment time (78.6%) and infection control (59%) as major benefits of the 1-week radiotherapy regimen. Patients also reported high satisfaction with the use of telehealth, with 97.3% feeling well-informed about their diagnosis, 88% feeling well-informed about treatment side effects, and 94% feeling supported by the medical team. However, only 27% agreed to video consultations for breast inspections at 1 year.

IN PRACTICE:

“Ultrahypofractionated whole breast radiotherapy leads to acceptable late toxicity rates at 1 year even when followed by a hypofractionated tumour bed boost,” the authors wrote. “Patient satisfaction with ultrahypofractionated treatment and virtual consultations without video was high.”

SOURCE:

The study, led by Jill Nicholson, MBBS, MRCP, FFR RCSI, St Luke's Radiation Oncology Network, St Luke's Hospital, Dublin, Ireland, was published online in Advances in Radiation Oncology.

LIMITATIONS:

The short follow-up period might not capture all late toxicities. Variability in patient-reported outcomes could affect consistency. The range in boost received (four to eight fractions) could have influenced patients’ experiences.

DISCLOSURES:

Nicholson received funding from the St. Luke’s Institute of Cancer Research, Dublin, Ireland. No other relevant conflicts of interest were disclosed by the authors.

 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Belly Fat Beats BMI in Predicting Colorectal Cancer Risk

Article Type
Changed

TOPLINE:

Individuals with normal body mass index (BMI) measurements may still face an increased risk for colorectal cancer if they have central obesity, characterized by excess fat around the abdomen.

METHODOLOGY:

  • General obesity, often measured using BMI, is a recognized risk factor for colorectal cancer, but how much of this association is due to central obesity is unclear.
  • Researchers assessed the associations of BMI, waist-to-hip ratio (WHR), and waist circumference (WC) with colorectal cancer risk, and the degree of independence among these associations, in participants aged 40-69 years recruited into the UK Biobank cohort study from 2006 to 2010.
  • Anthropometric measurements were performed using standardized methods (a brief sketch of the standard definitions follows this list).
  • Cancer registry and hospital data linkage identified colorectal cancer cases in the UK Biobank.
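As a quick reference for the measures being compared, here is a minimal sketch of the standard anthropometric definitions. The central-obesity cutoffs are commonly cited WHO/NIH thresholds and are assumptions for illustration; they may differ from the category definitions the study used.

```python
# BMI, waist-to-hip ratio, and a simple central-obesity flag.
# The cutoffs are commonly cited WHO/NIH thresholds used here for illustration;
# they are not necessarily the category boundaries used in this study.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def whr(waist_cm: float, hip_cm: float) -> float:
    return waist_cm / hip_cm

def central_obesity(waist_cm: float, hip_cm: float, sex: str) -> bool:
    """WHR >= 0.90 (men) / 0.85 (women), or WC > 102 cm (men) / 88 cm (women)."""
    if sex == "male":
        return whr(waist_cm, hip_cm) >= 0.90 or waist_cm > 102
    return whr(waist_cm, hip_cm) >= 0.85 or waist_cm > 88

# Example: a man with a "normal" BMI can still meet central-obesity criteria.
print(round(bmi(weight_kg=70, height_m=1.78), 1))              # 22.1
print(central_obesity(waist_cm=100, hip_cm=96, sex="male"))    # True (WHR ~1.04)
```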

TAKEAWAY:

  • Researchers included 460,784 participants (mean age, 56.3 years; 46.7% men), of whom 67.1% had either overweight or obesity, and 49.4% and 60.5% had high or very high WHR and WC, respectively.
  • During the median 12.5-year follow-up period, 5977 participants developed colorectal cancer.
  • Each SD increase in WHR (hazard ratio [HR], 1.18) showed a stronger association with colorectal cancer risk than each SD increase in BMI (HR, 1.10); a sketch of how per-SD estimates are typically obtained follows this list.
  • After adjustment for BMI, the association between WHR and colorectal cancer risk was slightly attenuated but remained robust (HR, 1.15); after adjustment for WHR, however, the association between BMI and colorectal cancer risk was substantially weakened (HR, 1.04).
  • WHR remained significantly associated with colorectal cancer risk across all BMI categories, whereas the associations of BMI with colorectal cancer risk were weak and not statistically significant within WHR categories.
  • Central obesity demonstrated consistent associations with both colon and rectal cancer risks in both sexes before and after adjustment for BMI, whereas BMI showed no significant association with colorectal cancer risk in women or with rectal cancer risk after WHR adjustment.

IN PRACTICE:

“[The study] results also underline the importance of integrating additional anthropometric measures such as WHR alongside BMI into routine clinical practice for more effective prevention and management of obesity, whose prevalence is steadily increasing in many countries worldwide, in order to limit the global burden of colorectal cancer and many other obesity-related adverse health outcomes,” the authors wrote.

SOURCE:

The study was led by Fatemeh Safizadeh, German Cancer Research Center (DKFZ), Heidelberg. It was published online in The International Journal of Obesity.

LIMITATIONS:

This study relied on only one-time measurements of anthropometric measures at baseline, without considering previous lifetime history of overweight and obesity or changes during follow-up. Additionally, WHR and WC may not be the most accurate measures of central obesity, as WC includes both visceral and subcutaneous adipose tissue. The study population also showed evidence of healthy volunteer bias, with more health-conscious and socioeconomically advantaged participants being somewhat overrepresented.

DISCLOSURES:

The authors declared no competing interests.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Adalimumab for Psoriasis: Study Compares Biosimilars Vs. Originator

Article Type
Changed

TOPLINE:

Adalimumab biosimilars demonstrated drug survival and safety comparable with the originator product (Humira) among new users, but patients who switched from Humira to a biosimilar had a 35% higher discontinuation rate than those who remained on Humira.

 

METHODOLOGY:

  • Researchers conducted a cohort study using data on patients with psoriasis who were treated with adalimumab, a tumor necrosis factor alpha inhibitor used to treat moderate to severe psoriasis, from the French National Health Data System, British Association of Dermatologists Biologics and Immunomodulators Register, and Spanish Registry of Systemic Therapy in Psoriasis.
  • The analysis included 7387 adalimumab-naive patients who were new users of an adalimumab biosimilar and 3654 patients (switchers) who switched from Humira to a biosimilar. Patients were matched and compared with patients receiving Humira.
  • Co-primary outcomes of the study were drug discontinuation (drug survival) and serious adverse events; a brief sketch of a drug survival analysis follows this list.
  • Researchers assessed the following adalimumab biosimilar brands: Amgevita, Imraldi, Hyrimoz, Idacio, and Hulio.
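As a brief illustration of what a drug survival analysis involves (time from treatment start to discontinuation, with ongoing treatment censored), here is a minimal sketch on synthetic data. The group labels, the hazard assumption, and the use of the lifelines package are illustrative choices, not details taken from the study.

```python
# Drug survival sketch: time from treatment start to discontinuation, with
# ongoing treatment censored at the end of follow-up. All numbers are synthetic
# and purely illustrative; the lifelines package is assumed to be installed.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(7)
n, max_follow_up = 2000, 36                    # patients per group; months

# Synthetic discontinuation times; switchers are given ~35% higher hazard.
t_stay_raw = rng.exponential(scale=40, size=n)
t_switch_raw = rng.exponential(scale=40 / 1.35, size=n)

t_stay, e_stay = np.minimum(t_stay_raw, max_follow_up), (t_stay_raw <= max_follow_up).astype(int)
t_switch, e_switch = np.minimum(t_switch_raw, max_follow_up), (t_switch_raw <= max_follow_up).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(t_stay, event_observed=e_stay, label="Stayed on Humira")
print("Stayed on Humira, median months on drug:", round(kmf.median_survival_time_, 1))
kmf.fit(t_switch, event_observed=e_switch, label="Switched to biosimilar")
print("Switched, median months on drug:", round(kmf.median_survival_time_, 1))

result = logrank_test(t_stay, t_switch, event_observed_A=e_stay, event_observed_B=e_switch)
print("Log-rank p-value:", result.p_value)
```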

TAKEAWAY:

  • All-cause drug discontinuation rates were similar between new users of biosimilars and Humira new users (hazard ratio [HR], 0.99; 95% CI, 0.94-1.04).
  • Discontinuation rates were higher among those who switched from Humira to a biosimilar (HR, 1.35; 95% CI, 1.19-1.52) than among those who stayed on Humira. Switching to Amgevita (HR, 1.25; 95% CI, 1.13-1.27), Imraldi (HR, 1.53; 95% CI, 1.33-1.76), and Hyrimoz (HR, 1.80; 95% CI, 1.29-2.52) was associated with higher discontinuation rates.
  • Serious adverse events were not significantly different between biosimilar new users and Humira new users (incidence rate ratio [IRR], 0.91; 95% CI, 0.80-1.05) or between patients who switched from Humira to a biosimilar and those who stayed on Humira (IRR, 0.92; 95% CI, 0.83-1.01).
  • No significant differences in discontinuation because of ineffectiveness were found between biosimilar and Humira new users (HR, 0.97; 95% CI, 0.88-1.08). Discontinuation because of adverse events was also comparable for all biosimilars among new users, except for Hyrimoz (HR, 0.54; 95% CI, 0.35-0.85), which showed fewer discontinuations than Humira.

IN PRACTICE:

“This study found comparable drug survival and safety between adalimumab biosimilars and Humira in adalimumab-naive patients, supporting the use of biosimilars as viable alternatives for new patients,” the authors wrote. However, noting that discontinuation was more likely among those who switched from Humira to a biosimilar, they added: “Changes in treatment response, skin or injection site reactions, and nocebo effects may contribute to treatment discontinuation post-switch. Thus, patients who switch from Humira to biosimilars may require closer monitoring and support to alleviate these challenges.”

SOURCE:

The study was led by Duc Binh Phan, Dermatology Centre, Northern Care Alliance NHS Foundation Trust in Manchester, England. It was published online in The British Journal of Dermatology.

LIMITATIONS:

Unmeasured factors, including psychological perceptions, regional policies, and drug availability, could influence drug survival, meaning the results may not fully reflect treatment effectiveness or safety. Most Humira users in the registries were enrolled before biosimilars became available, making it impractical to match new users on the basis of treatment initiation year. Additionally, reasons for discontinuation were not available in the French National Health Data System.

DISCLOSURES:

In the United Kingdom, the research was funded by a Psoriasis Association PhD studentship and supported by the NIHR Manchester Biomedical Research Centre. In France, the contributing authors are employees of the French National Health Insurance, the French National Agency for the Safety of Medicines and Health Products, and Assistance Publique-Hôpitaux de Paris and received no specific funding. The authors reported receiving consulting and speaker fees and clinical trial sponsorship from various pharmaceutical companies. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

TOPLINE:

Biosimilars demonstrate comparable drug survival and safety with adalimumab among new users, but patients switching from Humira (the originator product) to biosimilars had a 35% higher discontinuation rate than those who remained on Humira.

 

METHODOLOGY:

  • Researchers conducted a cohort study using data on patients with psoriasis who were treated with adalimumab, a tumor necrosis factor alpha inhibitor used to treat moderate to severe psoriasis, from the French National Health Data System, British Association of Dermatologists Biologics and Immunomodulators Register, and Spanish Registry of Systemic Therapy in Psoriasis.
  • The analysis included 7387 adalimumab-naive patients who were new users of an adalimumab biosimilar and 3654 patients (switchers) who switched from Humira to a biosimilar. Patients were matched and compared with patients receiving Humira.
  • Co-primary outcomes of the study were drug discontinuation and serious adverse events.
  • Researchers assessed the following adalimumab biosimilar brands: Amgevita, Imraldi, Hyrimoz, Idacio, and Hulio.

TAKEAWAY:

  • All-cause drug discontinuation rates were similar between new users of biosimilars and Humira new users (hazard ratio [HR], 0.99; 95% CI, 0.94-1.04).
  • Discontinuation rates were higher among those who switched from Humira to a biosimilar (HR, 1.35; 95% CI, 1.19-1.52) than among those who stayed on Humira. Switching to Amgevita (HR, 1.25; 95% CI, 1.13-1.27), Imraldi (HR, 1.53; 95% CI, 1.33-1.76), and Hyrimoz (HR, 1.80; 95% CI, 1.29-2.52) was associated with higher discontinuation rates.
  • Serious adverse events were not significantly different between new users of Humira and biosimilar new users (incidence rate ratio [IRR], 0.91; 95% CI, 0.80-1.05), and between patients who switched from a biosimilar to Humira and those who stayed on Humira (IRR, 0.92; 95% CI, 0.83-1.01).
  • No significant differences in discontinuation because of ineffectiveness were found between biosimilar and Humira new users (HR, 0.97; 95% CI, 0.88-1.08). Discontinuation because of adverse events was also comparable for all biosimilars among new users, except for Hyrimoz (HR, 0.54; 95% CI, 0.35-0.85), which showed fewer discontinuations than Humira.

IN PRACTICE:

“This study found comparable drug survival and safety between adalimumab biosimilars and Humira in adalimumab-naive patients, supporting the use of biosimilars as viable alternatives for new patients,” the authors wrote. However, noting that discontinuation was more likely among those who switched from Humira to a biosimilar, they added: “Changes in treatment response, skin or injection site reactions, and nocebo effects may contribute to treatment discontinuation post-switch. Thus, patients who switch from Humira to biosimilars may require closer monitoring and support to alleviate these challenges.”

SOURCE:

The study was led by Duc Binh Phan, Dermatology Centre, Northern Care Alliance NHS Foundation Trust in Manchester, England. It was published online in The British Journal of Dermatology.

LIMITATIONS:

Unmeasured factors including psychological perceptions, regional policies, and drug availability could influence drug survival, making the results not fully reflective of treatment effectiveness or safety. Most Humira users in registries were enrolled before biosimilars became available, making it impractical to match new users on the basis of treatment initiation years. Additionally, reasons for discontinuation were not available in the French National Health Data System.

DISCLOSURES:

In the United Kingdom, the research was funded by the Psoriasis Association PhD studentship and supported by the NIHR Manchester Biomedical Research Centre. In France, the authors are employees of the French National Health Insurance, the French National Agency for the Safety of Medicines and Health Products, and the Assistance Publique — Hôpitaux de Paris and received no funding. The authors reported receiving consulting and speaker fees and clinical trial sponsorship from various pharmaceutical companies. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Publications
Publications
Topics
Article Type
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Gate On Date
Un-Gate On Date
Use ProPublica
CFC Schedule Remove Status
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
survey writer start date