Is Metformin An Unexpected Ally Against Long COVID?
TOPLINE:
In adults with type 2 diabetes, metformin use before and during SARS-CoV-2 infection was associated with a lower risk for death or postacute sequelae of SARS-CoV-2 (PASC) in one of two large electronic health record databases.
METHODOLOGY:
- Previous studies have shown that metformin use before and during SARS-CoV-2 infection reduces severe COVID-19 and postacute sequelae of SARS-CoV-2 (PASC), also referred to as long COVID, in adults.
- A retrospective cohort analysis was conducted to evaluate the association between metformin use before and during SARS-CoV-2 infection and the subsequent incidence of PASC.
- Researchers used data from the National COVID Cohort Collaborative (N3C) and National Patient-Centered Clinical Research Network (PCORnet) electronic health record (EHR) databases to identify adults (aged ≥ 21 years) with type 2 diabetes who had been prescribed a diabetes medication within the previous 12 months.
- Participants were categorized into those using metformin (metformin group) and those using other noninsulin diabetes medications such as sulfonylureas, dipeptidyl peptidase-4 inhibitors, or thiazolidinediones (the comparator group); those who used glucagon-like peptide 1 receptor agonists or sodium-glucose cotransporter-2 inhibitors were excluded.
- The primary outcome was the incidence of PASC or death within 180 days after SARS-CoV-2 infection, defined using the International Classification of Diseases (ICD-10) U09.9 diagnosis code and/or a computable phenotype, that is, a predicted probability of PASC > 75% from a machine learning model trained on patients diagnosed using U09.9 (a minimal sketch of this outcome definition follows the list).
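For readers who want to see the outcome definition concretely, here is a minimal sketch of the composite flag described above. It is an illustrative reconstruction, not the study's code: the column names, the date arithmetic, and the use of a 0.75 probability cutoff are assumptions based solely on the description in this summary.

```python
from datetime import timedelta
import pandas as pd

def pasc_or_death(row, window_days=180, phenotype_threshold=0.75):
    """Flag the composite outcome: death, an ICD-10 U09.9 PASC diagnosis,
    or a model-predicted PASC probability above the threshold, any of which
    occurs within `window_days` of the index SARS-CoV-2 infection.
    Column names are hypothetical, not taken from the study's data model."""
    window_end = row["infection_date"] + timedelta(days=window_days)
    died = pd.notna(row["death_date"]) and row["death_date"] <= window_end
    coded_pasc = pd.notna(row["u099_date"]) and row["u099_date"] <= window_end
    phenotype_pasc = row["pasc_probability"] > phenotype_threshold
    return died or coded_pasc or phenotype_pasc

# Example with made-up values: a U09.9 code recorded within 180 days of infection
row = pd.Series({"infection_date": pd.Timestamp("2021-02-01"),
                 "death_date": pd.NaT,
                 "u099_date": pd.Timestamp("2021-05-15"),
                 "pasc_probability": 0.42})
print(pasc_or_death(row))  # True
```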
TAKEAWAY:
- Researchers identified 51,385 and 37,947 participants from the N3C and PCORnet datasets, respectively.
- In the N3C dataset, metformin use was associated with a 21% lower risk for death or PASC than non-metformin use based on the U09.9 diagnosis code (P < .001) and a 15% lower risk based on the PASC computable phenotype (P < .001).
- In the PCORnet dataset, the risk for death or PASC was 13% lower with metformin use than with non-metformin use based on the U09.9 diagnosis code (P = .08), whereas the risk did not differ significantly between the groups based on the PASC computable phenotype (P = .58).
- The incidence of PASC based on the U09.9 diagnosis code was similar for the metformin and comparator groups in both datasets (1.6% and 2.0% in N3C and 2.2% and 2.6% in PCORnet, respectively).
- However, when using the computable phenotype, the incidence rates of PASC for the metformin and comparator groups were 4.8% and 5.2% in N3C and 25.2% and 24.2% in PCORnet, respectively.
IN PRACTICE:
“The incidence of PASC was lower when defined by [International Classification of Diseases] code, compared with a computable phenotype in both databases,” the authors wrote. “This may reflect the challenges of clinical care for adults needing chronic medication management and the likelihood of those adults receiving a formal PASC diagnosis.”
SOURCE:
The study was led by Steven G. Johnson, PhD, Institute for Health Informatics, University of Minnesota, Minneapolis. It was published online in Diabetes Care.
LIMITATIONS:
The use of EHR data had several limitations, including the inability to examine a dose-dependent relationship and the lack of information on whether medications were taken before, during, or after the acute infection. The outcome definition involved the need for a medical encounter and, thus, may not capture data on all patients experiencing symptoms of PASC. The analysis focused on the prevalent use of chronic medications, limiting the assessment of initiating metformin in those diagnosed with COVID-19.
DISCLOSURES:
The study was supported by the National Institutes of Health Agreement as part of the RECOVER research program. One author reported receiving salary support from the Center for Pharmacoepidemiology and owning stock options in various pharmaceutical and biopharmaceutical companies. Another author reported receiving grant support and consulting contracts, being involved in expert witness engagement, and owning stock options in various pharmaceutical, biopharmaceutical, diabetes management, and medical device companies.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Vitamin D in Pregnancy Results in Stronger Bones for Kids
TOPLINE:
Gestational supplementation of 1000 IU/d cholecalciferol (vitamin D3) from early pregnancy until delivery increases the bone mineral content, bone mineral density (BMD), and bone mineral apparent density in children at age 6-7 years.
METHODOLOGY:
- The double-blinded, placebo-controlled MAVIDOS trial of gestational vitamin D supplementation previously showed increased BMD at age 4 years (but no difference at birth), and it is unclear how the effect may persist or change over time.
- In the original trial, researchers randomized 1134 pregnant women with singleton pregnancies at three UK hospitals between 2008 and 2014, and the 723 children born to mothers recruited in Southampton were invited to continue in offspring follow-up.
- Mothers were randomly assigned to receive either 1000 IU/d of vitamin D or placebo from 14-17 weeks’ gestation until delivery; women in the placebo arm could take up to 400 IU/d of vitamin D.
- In this post hoc analysis, among 454 children who were followed up at age 6-7 years, 447 had a usable whole body and lumbar spine dual-energy x-ray absorptiometry scan (placebo group: n = 216, 48% boys, 98% White mothers; vitamin D group: n = 231, 56% boys, 96% White mothers).
- Offspring follow-up measures at birth and at ages 4 and 6-7 years were bone area, bone mineral content, BMD, and bone mineral apparent density, derived from a dual-energy x-ray absorptiometry scan of whole body less head (WBLH), as well as fat and lean mass (a brief sketch of how bone mineral apparent density is derived follows the list).
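For context, bone mineral apparent density is a size-adjusted estimate computed from the two quantities a DXA scan reports directly: bone mineral content (BMC, in g) and projected bone area (in cm²). The sketch below uses one commonly cited approximation (BMC divided by area raised to the power 1.5); the exponent and the exact derivation used in the MAVIDOS analysis are assumptions here, not details taken from the paper.

```python
def bone_mineral_apparent_density(bmc_g: float, area_cm2: float, exponent: float = 1.5) -> float:
    """Size-adjusted bone density estimate.

    Areal BMD from DXA (BMC / area, g/cm^2) scales with bone size; raising the
    area term to an exponent (1.5 is a commonly used value, assumed here rather
    than confirmed by the study) gives an approximately volumetric correction.
    """
    return bmc_g / (area_cm2 ** exponent)

# Example with made-up values: 30 g of mineral over 25 cm^2 of projected area
print(bone_mineral_apparent_density(30.0, 25.0))  # 0.24
```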
TAKEAWAY:
- The effect of gestational vitamin D supplementation on bone outcomes in children was similar at ages 4 and 6-7 years.
- At age 6-7 years, gestational vitamin D supplementation resulted in higher WBLH bone mineral content (0.15 SD; 95% CI, 0.04-0.26) and BMD (0.18 SD; 95% CI, 0.06-0.31) than placebo.
- The WBLH bone mineral apparent density (0.18 SD; 95% CI, 0.04-0.32) was also higher in the vitamin D group.
- The lean mass was greater in the vitamin D group (0.09 SD; 95% CI, 0.00-0.17) than in the placebo group.
IN PRACTICE:
“These findings suggest that pregnancy vitamin D supplementation may be an important population health strategy to improve bone health,” the authors wrote.
SOURCE:
This study was led by Rebecca J. Moon, PhD, MRC Lifecourse Epidemiology Centre, University of Southampton, Southampton General Hospital, England. It was published online in The American Journal of Clinical Nutrition.
LIMITATIONS:
Only individuals with baseline vitamin D levels of 25-100 nmol/L were eligible, excluding those with severe deficiency who might have benefited the most from supplementation. The participants were mostly White, well educated, and commonly overweight, which may limit the generalizability of the findings to other populations. Only 47% of the original cohort participated in the follow-up phase. Differences in maternal age, smoking status, and education between participants who remained in the study and those who did not may have introduced bias and further affected generalizability.
DISCLOSURES:
The study was supported by Versus Arthritis UK, the Medical Research Council, the Bupa Foundation, the National Institute for Health and Care Research Southampton Biomedical Research Centre, and other sources. Some authors disclosed receiving travel reimbursement, speaker or lecture fees, honoraria, research funding, or personal or consultancy fees from the Alliance for Better Bone Health and various pharmaceutical, biotechnology, medical device, healthcare, and food and nutrition companies outside the submitted work.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Frailty, Not Just Advanced Age, Affects ANCA Vasculitis Outcomes
TOPLINE:
Older adults with antineutrophil cytoplasmic antibody (ANCA)–associated vasculitis face higher risks for end-stage renal disease or death and for severe infections. However, frailty, more than age, predicts severe infections within 2 years of diagnosis.
METHODOLOGY:
- Researchers conducted a retrospective cohort study using data from the Mass General Brigham ANCA-associated vasculitis cohort in the United States.
- They included 234 individuals (median age, 75 years) with incident ANCA-associated vasculitis who were treated from January 2002 to December 2019.
- Baseline frailty was measured using a claims-based frailty index, with data collected in the year before treatment initiation; individuals were categorized as nonfrail, prefrail, mildly frail, or moderately to severely frail (see the illustrative categorization sketch after this list).
- Frailty, either mild or moderate to severe, was noted in 44 of 118 individuals aged ≥ 75 years and in 25 of 116 individuals aged 65-74 years.
- The outcomes of interest were the incidences of end-stage renal disease or death and severe infections within 2 years of diagnosis. The association of age and frailty with clinical outcomes was assessed in those aged 65-74 years and ≥ 75 years.
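A claims-based frailty index is a continuous score, typically ranging from roughly 0 to 1, constructed from diagnosis, procedure, and utilization codes and then cut into ordered categories. The sketch below shows how such a score might be mapped to the four categories named above; the specific cut points are commonly cited values for this type of index and are assumptions, not thresholds confirmed by this study.

```python
def frailty_category(cfi: float) -> str:
    """Map a claims-based frailty index (CFI) score to an ordered category.

    Cut points (<0.15, 0.15-0.24, 0.25-0.34, >=0.35) are commonly used for
    claims-based frailty indices; they are assumptions here, not values
    reported by the study.
    """
    if cfi < 0.15:
        return "nonfrail"
    if cfi < 0.25:
        return "prefrail"
    if cfi < 0.35:
        return "mildly frail"
    return "moderately to severely frail"

print(frailty_category(0.28))  # mildly frail
```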
TAKEAWAY:
- Frailty was a significant predictor of severe infections within 2 years of ANCA-associated vasculitis diagnosis (adjusted hazard ratio [aHR], 8.46; 95% CI, 3.95-18.14), showing a stronger association than seen for chronological age ≥ 75 years (aHR, 2.52; 95% CI, 1.26-5.04).
- The incidence of severe infections was higher in those with vs without frailty in the age groups 65-74 years (38.9 vs 0.8 cases per 100 person-years) and ≥ 75 years (61.9 vs 12.3 cases per 100 person-years).
- Older age (≥ 75 years) was associated with an increased risk for end-stage renal disease or death (aHR, 4.50; 95% CI, 1.83-11.09); however, frailty was not.
- The effect of frailty on end-stage renal disease or death varied by age, with a larger difference observed in individuals aged 65-74 years (frail vs nonfrail, 7.5 vs 2.0 cases per 100 person-years) than in those aged ≥ 75 years (13.5 vs 16.0 cases per 100 person-years).
IN PRACTICE:
“Our results highlight the fact that assessment of frailty in the care of older adults with ANCA-associated vasculitis can distinguish a group of patients at increased risk of severe infections who might benefit from interventions to minimize infection risk and optimize outcomes,” the authors wrote.
“Incorporating frailty screening into the management of ANCA-associated vasculitis provides an opportunity to offer personalized, evidence-based care to frail older adults,” Alexandra Legge, MD, Dalhousie University, Halifax, Nova Scotia, Canada, wrote in an associated comment published online in The Lancet Rheumatology.
SOURCE:
This study was led by Sebastian E. Sattui, MD, Division of Rheumatology and Clinical Immunology, Department of Medicine, University of Pittsburgh in Pennsylvania. It was published online on September 18, 2024, in The Lancet Rheumatology.
LIMITATIONS:
The study’s observational design and single-center setting limited the generalizability of the findings. Residual confounding may have been possible despite adjustments for relevant baseline factors. The requirement of at least one healthcare encounter before baseline may have underrepresented individuals without frailty. Differences in treatment patterns after the baseline period were not controlled for.
DISCLOSURES:
The study was supported by the National Institutes of Health and the National Institute of Arthritis and Musculoskeletal and Skin Diseases. Some authors reported receiving grants, research support, consulting fees, honoraria, or royalties or serving on advisory boards of pharmaceutical companies and other sources outside of the submitted work.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Hypothyroidism Treatment Does Not Affect Cognitive Decline in Menopausal Women
TOPLINE:
Women with hypothyroidism treated with levothyroxine show no significant cognitive decline across the menopausal transition compared with those without thyroid disease.
METHODOLOGY:
- Levothyroxine, the primary treatment for hypothyroidism, has been linked to perceived cognitive deficits, yet it is unclear whether this is due to the underlying condition being inadequately treated or other factors.
- Using data collected from the Study of Women’s Health Across the Nation, which encompasses five ethnic/racial groups from seven centers across the United States, researchers compared cognitive function over time between women with hypothyroidism treated with levothyroxine and those without thyroid disease.
- Participants underwent cognitive testing across three domains — processing speed, working memory, and episodic memory — which were assessed over a mean follow-up of 13 years (a minimal trajectory-model sketch follows this list).
- Further analyses assessed the impact of abnormal levels of thyroid-stimulating hormone on cognitive outcomes.
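Comparing cognitive change over time between two groups is typically done with a longitudinal model in which a group-by-time interaction captures any difference in the rate of decline. The sketch below, using statsmodels, is illustrative only; the variable names, the random-intercept structure, and the model form are assumptions, not the study's actual specification.

```python
import statsmodels.formula.api as smf

def fit_trajectory_model(df):
    """Random-intercept mixed model for one cognitive domain.

    `df` is assumed to be long-format data with one row per visit and columns
    subject_id, years_since_baseline, levothyroxine (0/1), and score.
    The levothyroxine:years_since_baseline coefficient estimates whether annual
    change in the score differs between the treated and comparison groups.
    """
    model = smf.mixedlm(
        "score ~ levothyroxine * years_since_baseline",
        data=df,
        groups=df["subject_id"],
    )
    return model.fit()
```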
TAKEAWAY:
- Of 2033 women included, 227 (mean age, 49.8 years) had levothyroxine-treated hypothyroidism and 1806 (mean age, 50.0 years) did not have thyroid disease; the proportion of women with premenopausal or early perimenopausal status at baseline was higher in the hypothyroidism group (54.2% vs 49.8%; P = .010).
- At baseline, levothyroxine-treated women had higher scores for processing speed (mean score, 56.5 vs 54.4; P = .006) and working memory (mean score, 6.8 vs 6.4; P = .018) than those without thyroid disease; however, no difference in episodic memory was observed between the groups.
- Over the study period, there was no significant difference in cognitive decline between the groups.
- There was no significant effect of levothyroxine-treated hypothyroidism on working memory or episodic memory, although an annual decline in processing speed was observed (P < .001).
- Sensitivity analyses determined that abnormal levels of thyroid-stimulating hormone did not affect cognitive outcomes in women with hypothyroidism.
IN PRACTICE:
When cognitive decline is observed in these patients, the authors advised that “clinicians should resist anchoring on inadequate treatment of hypothyroidism as the cause of these symptoms and may investigate other disease processes (eg, iron deficiency, B12 deficiency, sleep apnea, celiac disease).”
SOURCE:
The study, led by Matthew D. Ettleson, Section of Endocrinology, Diabetes, and Metabolism, University of Chicago, was published online in Thyroid.
LIMITATIONS:
The cognitive assessments in the study were not designed to provide a thorough evaluation of all aspects of cognitive function. The study may not have been adequately powered to detect small effects of levothyroxine-treated hypothyroidism on cognitive outcomes. The higher levels of education attained by the study population may have acted as a protective factor against cognitive decline, potentially biasing the results.
DISCLOSURES:
The Study of Women’s Health Across the Nation was supported by grants from the National Institutes of Health (NIH), DHHS, through the National Institute on Aging, the National Institute of Nursing Research, and the NIH Office of Research on Women’s Health. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Race Adjustments in Algorithms Boost CRC Risk Prediction
TOPLINE:
Accounting for racial disparities, including in the quality of family history data, enhanced the predictive performance of a colorectal cancer (CRC) risk prediction model.
METHODOLOGY:
- The medical community is reevaluating the use of race adjustments in clinical algorithms due to concerns about the exacerbation of health disparities, especially as reported family history data are known to vary by race.
- To understand how adjusting for race affects the accuracy of CRC prediction algorithms, researchers studied data from community health centers across 12 states as part of the Southern Community Cohort Study.
- Researchers compared two screening algorithms that modeled 10-year CRC risk: a race-blind algorithm and a race-adjusted algorithm that included Black race as a main effect and as an interaction with family history (see the illustrative model sketch after this list).
- The primary outcome was the development of CRC within 10 years of enrollment, assessed using data collected from surveys at enrollment and follow-ups, cancer registry data, and National Death Index reports.
- The researchers compared the algorithms’ predictive performance using such measures as area under the receiver operating characteristic curve (AUC) and calibration and also assessed how adjusting for race changed the proportion of Black participants identified as being at high risk for CRC.
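The structural difference between the two algorithms comes down to whether the linear predictor includes race terms. The sketch below fits both forms as logistic models for 10-year CRC risk; the predictor set, variable names, and model family are assumptions based on the description above, not the authors' exact specification.

```python
import statsmodels.formula.api as smf

# `df` is assumed to have one row per participant with columns crc_10yr (0/1),
# age, sex, black (0/1), and family_history ("known_positive", "known_negative",
# or "unknown"), plus whatever other predictors the real model includes.

def fit_race_adjusted_model(df):
    """Race-adjusted form: Black race enters as a main effect and interacts with
    reported family history, so family history can carry less predictive weight
    where it is less reliably reported."""
    return smf.logit("crc_10yr ~ age + sex + black * C(family_history)", data=df).fit()

def fit_race_blind_model(df):
    """Race-blind comparator: the same predictors with no race terms."""
    return smf.logit("crc_10yr ~ age + sex + C(family_history)", data=df).fit()
```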
TAKEAWAY:
- The study sample included 77,836 adults aged 40-74 years with no history of CRC at baseline.
- Despite having higher cancer rates, Black participants were more likely to report unknown family history (odds ratio [OR], 1.69; P < .001) and less likely to report known positive family history (OR, 0.68; P < .001) than White participants.
- The interaction term between race and family history was 0.56, indicating that reported family history was less predictive of CRC risk in Black participants than in White participants (P = .010).
- Compared with the race-blinded algorithm, the race-adjusted algorithm increased the fraction of Black participants among the predicted high-risk group (66.1% vs 74.4%; P < .001), potentially enhancing access to screening.
- The race-adjusted algorithm improved the goodness of fit (P < .001) and showed a small improvement in AUC among Black participants (0.611 vs 0.608; P = .006).
IN PRACTICE:
“Our analysis found that removing race from colorectal screening predictors could reduce the number of Black patients recommended for screening, which would work against efforts to reduce disparities in colorectal cancer screening and outcomes,” the authors wrote.
SOURCE:
The study, led by Anna Zink, PhD, the University of Chicago Booth School of Business, Chicago, was published online in Proceedings of the National Academy of Sciences of the USA.
LIMITATIONS:
The study did not report any limitations.
DISCLOSURES:
The study was supported by the National Cancer Institute of the National Institutes of Health. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Does Screening for CKD Benefit Older Adults?
TOPLINE:
Short-term mortality, hospitalizations, and cardiovascular disease (CVD) events do not differ significantly between patients whose chronic kidney disease (CKD) is diagnosed during routine medical care and those whose CKD is detected through screening. The same study found that older age, male sex, and a diagnosis of heart failure were associated with an increased risk for mortality in patients with CKD.
METHODOLOGY:
- Researchers conducted a prospective cohort study involving 892 primary care patients aged 60 years or older with CKD from the Oxford Renal Cohort Study in England.
- Participants were categorized into those with existing CKD (n = 257; median age, 75 years), screen-detected CKD (n = 185; median age, roughly 73 years), or temporary reduction in kidney function (n = 450; median age, roughly 73 years).
- The primary outcome was a composite of all-cause mortality, hospitalization, CVD, or end-stage kidney disease (a minimal analysis sketch follows the list).
- The secondary outcomes were the individual components of the composite primary outcome and factors associated with mortality in those with CKD.
TAKEAWAY:
- The composite outcome did not differ significantly between patients with preexisting CKD and those whose kidney disease was identified through screening (adjusted hazard ratio [aHR], 0.94; 95% CI, 0.67-1.33); an illustrative sketch of this type of adjusted comparison follows this list.
- Risks for death, hospitalization, CVD, or end-stage kidney disease were not significantly different between the two groups.
- Older age (aHR per year, 1.10; 95% CI, 1.06-1.15), male sex (aHR, 2.31; 95% CI, 1.26-4.24), and heart failure (aHR, 5.18; 95% CI, 2.45-10.97) were associated with higher risks for death.
- No cases of end-stage kidney disease were reported during the study period.
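The adjusted hazard ratios above are the kind of output produced by a survival model. The sketch below, on synthetic data, assumes a Cox proportional hazards model (the summary does not state which model the authors used); all variable names and values are hypothetical, and lifelines is simply one convenient implementation.

```python
# Rough sketch on synthetic data, assuming a Cox proportional hazards model
# (the summary does not state the exact model); all column names and values
# are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 892
df = pd.DataFrame({
    "screen_detected": rng.integers(0, 2, n),   # 1 = CKD found by screening, 0 = existing diagnosis
    "age": rng.integers(60, 90, n),
    "male": rng.integers(0, 2, n),
    "heart_failure": rng.binomial(1, 0.1, n),
})
# Hypothetical event-time process so the example runs end to end.
hazard = 0.05 * np.exp(0.10 * (df.age - 75) + 0.5 * df.male + 1.0 * df.heart_failure)
time = rng.exponential(1 / hazard)
df["time_years"] = np.minimum(time, 3.0)   # administrative censoring at 3 years
df["event"] = (time < 3.0).astype(int)     # composite: death/hospitalization/CVD/ESKD

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="event")
# The exp(coef) column gives adjusted hazard ratios, analogous to the aHRs above.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```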
IN PRACTICE:
“Our findings show that the risk of short-term mortality, hospitalization, and CVD is comparable in people diagnosed through screening to those diagnosed routinely in primary care. This suggests that screening older people for CKD may be of value to increase detection and enable disease-modifying treatment to be initiated at an earlier stage,” the study authors wrote.
SOURCE:
The study was led by Anna K. Forbes, MBChB, and José M. Ordóñez-Mena, PhD, of the Nuffield Department of Primary Care Health Sciences at the University of Oxford, England. It was published online in BJGP Open.
LIMITATIONS:
The study had a relatively short follow-up period and a cohort primarily consisting of individuals with early-stage CKD, which may have limited the identification of end-stage cases of the condition. The study population predominantly consisted of White individuals, affecting the generalizability of the results to more diverse populations. Misclassification bias may have occurred due to changes in kidney function over time.
DISCLOSURES:
The data linkage provided by NHS Digital was supported by funding from the NIHR School of Primary Care Research. Some authors were partly supported by the NIHR Oxford Biomedical Research Centre and NIHR Oxford Thames Valley Applied Research Collaborative. One author reported receiving financial support for attending a conference, while another received consulting fees from various pharmaceutical companies. Another author reported receiving a grant from the Wellcome Trust and payment while working as a presenter for NB Medical and is an unpaid trustee of some charities.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
The Uneven Surge in Diabetes in the United States
TOPLINE:
METHODOLOGY:
- Over 37 million people in the United States have diabetes, and its prevalence is expected to keep rising in the coming years, making it particularly crucial to identify high-risk demographic groups.
- To assess recent national trends and disparities in diabetes prevalence among US adults, researchers conducted an observational study using data from the Behavioral Risk Factor Surveillance System and included 5,312,827 observations from 2012 to 2022.
- Diabetes was defined on the basis of a previous self-reported diagnosis using standardized questionnaires.
- Age, sex, race, education, physical activity, income, and body mass index were used as risk indicators for a diabetes diagnosis.
- Age-standardized diabetes prevalence and the associations between risk factors and diabetes were assessed both overall and across sociodemographic groups; a brief worked example of direct age standardization follows this list.
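As referenced in the last bullet, the sketch below is a brief worked example of direct age standardization with made-up weights and prevalences; it illustrates the general calculation only, not the study's reference population or results.

```python
# Worked example with made-up numbers showing direct age standardization,
# the general idea behind the age-standardized prevalence described above;
# the weights and prevalences are hypothetical, not the study's values.
import pandas as pd

data = pd.DataFrame({
    "age_group":  ["18-44", "45-64", "65+"],
    "std_weight": [0.45, 0.35, 0.20],   # share of a reference standard population
    "prev_2012":  [0.03, 0.12, 0.21],   # crude prevalence by age group, 2012
    "prev_2022":  [0.04, 0.14, 0.24],   # crude prevalence by age group, 2022
})

# Age-standardized prevalence = sum of (group prevalence x standard weight).
std_2012 = (data.prev_2012 * data.std_weight).sum()
std_2022 = (data.prev_2022 * data.std_weight).sum()
print(f"2012: {std_2012:.1%}  2022: {std_2022:.1%}  relative change: {std_2022 / std_2012 - 1:.1%}")
```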
TAKEAWAY:
- The overall prevalence of diabetes increased by 18.6% (P < .001) from 2012 to 2022, with the highest prevalence observed among non-Hispanic Black individuals (15.8%) and people aged ≥ 65 years (23.86%).
- The likelihood of being diagnosed with diabetes was 1.15 times higher in men than in women, 5.16 times higher in adults aged 45-64 years than in those aged 18-24 years, and 3.64 times higher in those with obesity than in those with normal weight.
- The risk for being diagnosed with diabetes was 1.60 times higher among Hispanic individuals, 1.67 times higher among non-Hispanic Asian individuals, and 2.10 times higher among non-Hispanic Black individuals than among non-Hispanic White individuals.
- Individuals with a college education and higher income level were 24% and 41% less likely, respectively, to be diagnosed with diabetes.
IN PRACTICE:
“Improving access to quality care, implementing diabetes prevention programs focusing on high-risk groups, and addressing social determinants through multilevel interventions may help curb the diabetes epidemic in the United States,” the authors wrote.
SOURCE:
The study, led by Sulakshan Neupane, MS, Department of Agricultural and Applied Economics, University of Georgia, Athens, Georgia, was published online in Diabetes, Obesity, and Metabolism.
LIMITATIONS:
The self-reported diagnoses and lack of clinical data may have introduced bias. Diabetes prevalence could not be analyzed in South-East Asian and South Asian populations owing to limitations in the data collection process.
DISCLOSURES:
The study was not supported by any funding, and no potential author disclosures or conflicts were identified.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Healthy Lifestyle Mitigates Brain Aging in Diabetes
TOPLINE:
Diabetes and prediabetes are associated with brains that appear older than chronologic age, with brain age gaps of 2.29 and 0.50 years, respectively. This association is more pronounced in men and those with poor cardiometabolic health but may be mitigated by a healthy lifestyle.
METHODOLOGY:
- Diabetes is a known risk factor for cognitive impairment, dementia, and global brain atrophy, but conflicting results have been reported for prediabetes, and it is unknown whether a healthy lifestyle can counteract its negative impact.
- Researchers examined the cross-sectional and longitudinal relationship between hyperglycemia and brain aging, as well as the potential mitigating effect of a healthy lifestyle in 31,229 dementia-free adults (mean age, 54.8 years; 53% women) from the UK Biobank, including 13,518 participants with prediabetes and 1149 with diabetes.
- The glycemic status of the participants was determined by their medical history, medication use, and A1c levels.
- The brain age gap was calculated as the difference between brain age, estimated from several hundred brain MRI phenotypes across six imaging modalities using a model trained in a subset of healthy individuals, and chronologic age; a schematic sketch of this approach follows this list.
- The roles of sex, cardiometabolic risk factors, and lifestyle in the association with brain age were also explored, with a healthy lifestyle defined as never smoking, no more than moderate alcohol consumption, and high physical activity.
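As referenced above, the sketch below outlines the general brain-age recipe: fit an age-prediction model on MRI-derived features in a healthy reference subset, then take the difference between predicted brain age and chronologic age. The features, model (ridge regression), and sample sizes are stand-ins, not the study's pipeline.

```python
# Schematic sketch only: random features stand in for the several hundred
# MRI phenotypes, and ridge regression stands in for the study's brain-age
# model. It shows the general recipe: train an age predictor in a healthy
# reference subset, then compute brain age gap = predicted brain age minus
# chronologic age.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(2)
n, p = 5_000, 300                        # participants x MRI-derived phenotypes (hypothetical sizes)
age = rng.uniform(45, 75, n)
signal = np.outer(age - 60, rng.normal(scale=0.05, size=p))   # age-related structure in the phenotypes
X = rng.normal(size=(n, p)) + signal
healthy = rng.random(n) < 0.3            # healthy reference subset used for training

model = RidgeCV(alphas=np.logspace(-2, 3, 20)).fit(X[healthy], age[healthy])
brain_age_gap = model.predict(X) - age   # positive gap = brain "looks older" than chronologic age
print("mean brain age gap outside the reference subset:", round(brain_age_gap[~healthy].mean(), 2))
```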
TAKEAWAY:
- Prediabetes and diabetes were associated with a higher brain age gap than normoglycemia (beta coefficients, 0.22 and 2.01; 95% CIs, 0.10-0.34 and 1.70-2.32, respectively), and the association for diabetes was more pronounced in men than in women and in those with a higher burden of cardiometabolic risk factors.
- The brain ages of those with prediabetes and diabetes were 0.50 years and 2.29 years older on average than their respective chronologic ages.
- In an exploratory longitudinal analysis of the 2414 participants with two brain MRI scans, diabetes was linked to a 0.27-year annual increase in the brain age gap, and higher A1c, but not prediabetes, was associated with a significant increase in brain age gap.
- A healthy lifestyle attenuated the association between diabetes and a higher brain age gap (P = .003), reducing the gap by 1.68 years, and the interaction between glycemic status and lifestyle was significant.
IN PRACTICE:
“Our findings highlight diabetes and prediabetes as ideal targets for lifestyle-based interventions to promote brain health,” the authors wrote.
SOURCE:
This study, led by Abigail Dove, Aging Research Center, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden, was published online in Diabetes Care.
LIMITATIONS:
The generalizability of the findings was limited due to a healthy volunteer bias in the UK Biobank. A high proportion of missing data prevented the inclusion of diet in the healthy lifestyle construct. Reverse causality may be possible as an older brain may contribute to the development of prediabetes by making it more difficult to manage medical conditions and adhere to a healthy lifestyle. A1c levels were measured only at baseline, preventing the assessment of changes in glycemic control over time.
DISCLOSURES:
The authors reported receiving funding from the Swedish Research Council; Swedish Research Council for Health, Working Life and Welfare; Karolinska Institutet Board of Research; Riksbankens Jubileumsfond; Marianne and Marcus Wallenberg Foundation; Alzheimerfonden; and Demensfonden. They declared no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Muscle Relaxants for Chronic Pain: Where Is the Greatest Evidence?
TOPLINE:
The long-term use of muscle relaxants may benefit patients with painful spasms or cramps and neck pain, according to a systematic review of clinical studies, but these drugs do not appear to be beneficial for low back pain, fibromyalgia, or headaches and can have adverse effects such as sedation and dry mouth.
METHODOLOGY:
- Researchers conducted a systematic review to evaluate the effectiveness of long-term use (≥ 4 weeks) of muscle relaxants for chronic pain lasting ≥ 3 months.
- They identified 30 randomized clinical trials involving 1314 patients and 14 cohort studies involving 1168 patients, grouped according to the categories of low back pain, fibromyalgia, painful cramps or spasticity, headaches, and other syndromes.
- Baclofen, tizanidine, cyclobenzaprine, eperisone, quinine, carisoprodol, orphenadrine, chlormezanone, and methocarbamol were the muscle relaxants assessed in comparison with placebo, other treatments, or untreated individuals.
TAKEAWAY:
- The long-term use of muscle relaxants reduced pain intensity in those with painful spasms or cramps and neck pain. Baclofen, orphenadrine, carisoprodol, and methocarbamol reduced cramp frequency, while eperisone and chlormezanone, respectively, improved neck pain and sleep quality in those with neck osteoarthritis.
- While some studies suggested that muscle relaxants reduced pain intensity in those with back pain and fibromyalgia, no significant between-group differences were observed, and the benefits seen with some medications diminished after their discontinuation.
- Although tizanidine improved pain severity in headaches, 25% of participants dropped out owing to adverse effects. Overall, certain muscle relaxants demonstrated pain relief, whereas others did not.
- The most common adverse effects of muscle relaxants were somnolence and dry mouth. Other adverse events included vomiting, diarrhea, nausea, weakness, and constipation.
IN PRACTICE:
“For patients already prescribed long-term SMRs [skeletal muscle relaxants], interventions are needed to assist clinicians to engage in shared decision-making with patients about deprescribing SMRs. This may be particularly true for older patients for whom risks of adverse events may be greater,” the authors wrote. “Clinicians should be vigilant for adverse effects and consider deprescribing if pain-related goals are not met.”
SOURCE:
The study, led by Benjamin J. Oldfield, MD, MHS, Yale School of Medicine, New Haven, Connecticut, was published online on September 19, 2024, in JAMA Network Open.
LIMITATIONS:
This systematic review was limited to publications written in English, Spanish, or Italian, potentially excluding studies from other regions. Variations in clinical sites, definitions of pain syndromes, medications, and durations of therapy precluded meta-analyses. Only quantitative studies were included, excluding valuable insights into patient experiences offered by qualitative studies.
DISCLOSURES:
The study was supported by the National Institute on Drug Abuse. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Hypnosis May Offer Relief During Sharp Debridement of Skin Ulcers
TOPLINE:
Hypnosis reduced pain during sharp debridement of skin ulcers in patients with immune-mediated inflammatory diseases in a small case series, with most patients reporting decreased pain awareness and some reporting pain relief lasting 2-3 days after the procedure.
METHODOLOGY:
- Researchers reported their experience with the anecdotal use of hypnosis for pain management in debridement of skin ulcers in immune-mediated inflammatory diseases.
- They studied 16 participants (14 women; mean age, 56 years; 14 with systemic sclerosis or morphea) with recurrent skin ulcerations requiring sharp debridement, who presented to a wound care clinic at the Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom. The participants had negative experiences with pharmacologic pain management.
- Participants consented to hypnosis during debridement as the only mode of analgesia, conducted by the same hypnosis-trained, experienced healthcare professional in charge of their ulcer care.
- Ulcer pain scores were recorded using a numerical rating pain scale before and immediately after debridement, with a score of 0 indicating no pain and 10 indicating worst pain.
TAKEAWAY:
- Hypnosis reduced the median ulcer pain score from 8 (interquartile range [IQR], 7-10) before debridement to 0.5 (IQR, 0-2) immediately after the procedure; a small worked example of these summary statistics follows this list.
- Of 16 participants, 14 reported being aware of the procedure but not feeling the pain, with only two participants experiencing a brief spike in pain.
- The other two participants reported experiencing reduced awareness and being pain-free during the procedure.
- Five participants reported a lasting decrease in pain perception for 2-3 days after the procedure.
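As a small worked example of the summary statistics quoted above, the snippet below computes a median and interquartile range from made-up 0-10 pain scores; it is illustrative only and does not use the study's data.

```python
# Tiny worked example with made-up scores (not the study's data) showing how
# pre- and post-debridement medians and interquartile ranges are computed
# from a 0-10 numerical rating scale.
import numpy as np

pre  = np.array([8, 7, 9, 10, 8, 7, 8, 9, 10, 7, 8, 8, 9, 7, 10, 8])   # before debridement
post = np.array([0, 1, 0, 2, 0, 0, 1, 0, 2, 0, 1, 0, 0, 2, 1, 0])      # immediately after

for label, scores in [("pre", pre), ("post", post)]:
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    print(f"{label}-debridement: median {med:g} (IQR {q1:g}-{q3:g})")
```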
IN PRACTICE:
“These preliminary data underscore the potential for the integration of hypnosis into the management of intervention-related pain in clinical care,” the authors wrote.
SOURCE:
The study was led by Begonya Alcacer-Pitarch, PhD, Leeds Institute of Rheumatic and Musculoskeletal Medicine, the University of Leeds, and Chapel Allerton Hospital in Leeds, United Kingdom. It was published as a correspondence on September 10, 2024, in The Lancet Rheumatology.
LIMITATIONS:
The small sample size may limit the generalizability of the findings. The methods used for data collection were not standardized, and the way participants were selected may have introduced selection bias.
DISCLOSURES:
The study did not have a funding source. The authors declared no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.