Rurality and Age May Shape Phone-Only Mental Health Care Access Among Veterans
TOPLINE:
Patients living in rural areas and those aged ≥ 65 years had increased odds of receiving mental health care exclusively by phone.
METHODOLOGY:
- Researchers explored factors linked to receiving phone-only mental health care among patients within the Department of Veterans Affairs.
- They included data for 1,156,146 veteran patients with at least one mental health-specific outpatient encounter between October 2021 and September 2022 and at least one between October 2022 and September 2023.
- Patients were categorized as those who received care through phone only (n = 49,125) and those who received care through other methods (n = 1,107,021). In the latter group, care was received exclusively through video (6.39%), exclusively in person (6.63%), or through a combination of in-person, video, and/or phone visits (86.98%).
- Demographic and clinical predictors, including rurality, age, sex, race, ethnicity, and the number of mental health diagnoses (< 3 vs ≥ 3), were evaluated.
TAKEAWAY:
- The phone-only group had a mean of 6.27 phone visits, whereas those who received care through other methods had a mean of 4.79 phone visits.
- Highly rural patients had 1.50 times higher odds of receiving phone-only mental health care than their urban counterparts (adjusted odds ratio [aOR], 1.50; P < .0001).
- Patients aged 65 years or older were more than twice as likely to receive phone-only care as those younger than 30 years (aOR, ≥ 2.17; P < .0001).
- Having fewer than three mental health diagnoses and having more than 50% of mental health visits conducted by medical providers were each associated with higher odds of receiving mental health care exclusively by phone (aORs, 2.03 and 1.87, respectively; P < .0001).
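To make the odds ratios above concrete, here is a small, purely illustrative Python sketch of how an aOR of 1.50 translates into probabilities. The baseline rate used is the cohort-wide phone-only rate (49,125 of 1,156,146, about 4.2%), which stands in for the urban rate only for illustration; it is not a figure reported by the study.

```python
# Illustrative only: converts a baseline probability to odds, scales the
# odds by the reported aOR (1.50 for highly rural vs urban patients),
# and converts back to a probability.
def apply_odds_ratio(p_baseline, odds_ratio):
    """Return the probability implied by scaling baseline odds by odds_ratio."""
    odds = p_baseline / (1 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# Cohort-wide phone-only rate, used here as a stand-in baseline (assumption).
p_baseline = 49125 / 1156146            # ~0.0425
p_rural = apply_odds_ratio(p_baseline, 1.50)
print(round(p_baseline, 4), round(p_rural, 4))
```

Because the outcome is uncommon, the odds ratio approximates a risk ratio here: scaling the odds by 1.50 raises the illustrative probability from about 4.2% to about 6.2%, roughly a 1.47-fold increase.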
IN PRACTICE:
“The results of this work help to characterize the phone-only patient population and can serve to inform future implementation efforts to ensure that patients are receiving care via the modality that best meets their needs,” the authors wrote.
SOURCE:
This study was led by Samantha L. Connolly, PhD, at the VA Boston Healthcare System in Boston. It was published online in The Journal of Rural Health.
LIMITATIONS:
This study focused on a veteran population, which may limit the generalizability of the findings to other groups. Additionally, its cross-sectional design precluded determining cause-and-effect relationships between the studied factors and phone-only care.
DISCLOSURES:
This study was supported by the US Department of Veterans Affairs. The authors declared having no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
TNF Inhibitors Show Comparable Safety With Non-TNF Inhibitors in US Veterans With RA-ILD
TOPLINE:
Tumor necrosis factor (TNF) inhibitors led to no significant difference in survival or respiratory-related hospitalizations, compared with non-TNF inhibitors, in patients with rheumatoid arthritis–associated interstitial lung disease (RA-ILD).
METHODOLOGY:
- Guidelines from the American College of Rheumatology and the American College of Chest Physicians conditionally advise against the use of TNF inhibitors for treating ILD in patients with RA-ILD, with persisting uncertainty about the safety of TNF inhibitors.
- Researchers conducted a retrospective cohort study using data from the US Department of Veterans Affairs, with a focus on comparing outcomes in patients with RA-ILD who initiated TNF or non-TNF inhibitors between 2006 and 2018.
- A total of 1047 US veterans with RA-ILD were included; 237 who initiated TNF inhibitors were propensity score–matched 1:1 with 237 who initiated non-TNF inhibitors (mean age, 68 years; 92% men).
- The primary composite outcome was time to death or respiratory-related hospitalization over a follow-up period of up to 3 years.
- The secondary outcomes included all-cause mortality, respiratory-related mortality, and respiratory-related hospitalization, with additional assessments over a 1-year period.
TAKEAWAY:
- No significant difference was observed in the composite outcome of death or respiratory-related hospitalization between the TNF and non-TNF inhibitor groups (adjusted hazard ratio, 1.21; 95% CI, 0.92-1.58).
- No significant differences in the risk for respiratory-related hospitalization and all-cause or respiratory-related mortality were found between the TNF and non-TNF inhibitor groups. Similar findings were observed for all the outcomes during 1 year of follow-up.
- The mean duration of medication use prior to discontinuation, the time to discontinuation, and the mean predicted forced vital capacity percentage were similar for both groups.
- In a subgroup analysis of patients aged ≥ 65 years, those treated with non-TNF inhibitors had a higher risk for the composite outcome and all-cause and respiratory-related mortality than those treated with TNF inhibitors. No significant differences in outcomes were observed between the two treatment groups among patients aged < 65 years.
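As a note on how the nonsignificance calls above are read: under the conventional 5% threshold, a hazard ratio is significant only when its 95% CI excludes 1, which the primary outcome's CI (0.92-1.58) does not. A minimal sketch of that check:

```python
# Minimal sketch: a hazard ratio is conventionally deemed statistically
# significant at the 5% level only when its 95% CI excludes 1.
def ci_excludes_one(lower, upper):
    """True when the 95% CI lies entirely above or entirely below 1."""
    return lower > 1.0 or upper < 1.0

# Primary composite outcome from the study: aHR, 1.21; 95% CI, 0.92-1.58.
significant = ci_excludes_one(0.92, 1.58)
print(significant)  # False: the CI spans 1, so no significant difference
```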
IN PRACTICE:
“Our results do not suggest that systematic avoidance of TNF inhibitors is required in all patients with rheumatoid arthritis–associated ILD. However, given disease heterogeneity and imprecision of our estimates, some subpopulations of patients with rheumatoid arthritis–associated ILD might benefit from specific biological or targeted synthetic DMARD [disease-modifying antirheumatic drug] treatment strategies,” the authors wrote.
SOURCE:
The study was led by Bryant R. England, MD, PhD, University of Nebraska Medical Center, Omaha. It was published online on January 7, 2025, in The Lancet Rheumatology.
LIMITATIONS:
Administrative algorithms were used for identifying RA-ILD, potentially leading to misclassification and limiting phenotyping accuracy. Even with the use of propensity score methods, there might still be residual selection bias or unmeasured confounding. The study lacked comprehensive measures of posttreatment forced vital capacity and other indicators of ILD severity. The study population, predominantly men and those with a smoking history, may limit the generalizability of the findings to other groups.
DISCLOSURES:
The study was primarily funded by the US Department of Veterans Affairs. Some authors reported having financial relationships with pharmaceutical companies unrelated to the submitted work.
This article was created using several editorial tools, including artificial intelligence, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
AI-Enhanced ECG Used to Predict Hypertension and Associated Risks
TOPLINE:
An artificial intelligence–enhanced ECG model predicted incident hypertension and associated cardiovascular risks, providing predictive value in addition to traditional markers.
METHODOLOGY:
- Researchers conducted a development and external validation prognostic cohort study in a secondary care setting to identify individuals at risk for incident hypertension.
- They developed AIRE-HTN, which was trained on a derivation cohort from the Beth Israel Deaconess Medical Center in Boston, involving 1,163,401 ECGs from 189,539 patients (mean age, 57.7 years; 52.1% women; 64.5% White individuals).
- External validation was conducted on 65,610 ECGs from a UK-based volunteer cohort, drawn from an equal number of patients (mean age, 65.4 years; 51.5% women; 96.3% White individuals).
- Incident hypertension was evaluated in 19,423 individuals without hypertension from the medical center cohort and in 35,806 individuals without hypertension from the UK cohort.
TAKEAWAY:
- AIRE-HTN predicted incident hypertension with a C-index of 0.70 (95% CI, 0.69-0.71) in both cohorts. Those in the quartile with the highest AIRE-HTN scores had a fourfold increased risk for incident hypertension (P < .001).
- The model’s predictive accuracy was maintained in individuals without left ventricular hypertrophy and those with normal ECGs and baseline blood pressure, indicating its robustness.
- The model was significantly additive to traditional clinical markers, with a continuous net reclassification index of 0.44 for the medical center cohort and 0.32 for the UK cohort.
- AIRE-HTN was an independent predictor of cardiovascular death (hazard ratio per 1-SD increase in score [HR], 2.24), heart failure (HR, 2.60), myocardial infarction (HR, 3.13), ischemic stroke (HR, 1.23), and chronic kidney disease (HR, 1.89) in outpatients from the medical center cohort (all P < .001), with largely consistent findings in the UK cohort.
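For interpreting the per-SD hazard ratios above: assuming, as is standard, that the score enters a Cox model linearly, the per-SD HR compounds multiplicatively across standard deviations. A hypothetical illustration (not a calculation from the paper):

```python
# Illustrative only: a hazard ratio reported "per 1-SD increase" compounds
# multiplicatively under a linear Cox model, so a score 2 SDs above the
# mean carries the per-SD HR squared.
def hr_for_sd_shift(hr_per_sd, n_sd):
    """Hazard ratio implied for a score n_sd standard deviations higher."""
    return hr_per_sd ** n_sd

# Using the study's per-SD HR for cardiovascular death (2.24):
hr_2sd = hr_for_sd_shift(2.24, 2)
print(round(hr_2sd, 2))  # 5.02
```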
IN PRACTICE:
“Results of exploratory and phenotypic analyses suggest the biological plausibility of these findings. Enhanced predictability could influence surveillance programs and primordial prevention,” the authors wrote.
SOURCE:
The study was led by Arunashis Sau, PhD, and Joseph Barker, MRes, National Heart and Lung Institute, Imperial College London, England. It was published online on January 2, 2025, in JAMA Cardiology.
LIMITATIONS:
In one cohort, hypertension was defined using International Classification of Diseases codes, which may lack granularity and not align with contemporary guidelines. The findings were not validated against ambulatory monitoring standards. The performance of the model in different populations and clinical settings remains to be explored.
DISCLOSURES:
The authors acknowledged receiving support from Imperial’s British Heart Foundation Centre for Excellence Award and disclosed receiving support from the British Heart Foundation, the National Institute for Health Research Imperial College Biomedical Research Centre, the EJP RD Research Mobility Fellowship, the Medical Research Council, and the Sir Jules Thorn Charitable Trust. Some authors reported receiving grants, personal fees, advisory fees, or laboratory work fees outside the submitted work.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Potassium Nitrate Fails to Boost Exercise Capacity in Patients With Heart Failure With Preserved Ejection Fraction
TOPLINE:
Potassium nitrate (KNO3) did not improve exercise capacity or quality of life compared with potassium chloride in patients with heart failure with preserved ejection fraction (HFpEF).
METHODOLOGY:
- This multicenter crossover trial, conducted across three centers in the United States, assessed the effect of administering KNO3 on exercise capacity and quality of life.
- It included 84 patients with symptomatic HFpEF (median age, 68 years; 69% women; 76% White) who had a left ventricular ejection fraction over 50% and elevated intracardiac pressures. Participants had obesity (mean body mass index, 36.22), with a high prevalence of hypertension, diabetes, and obstructive sleep apnea.
- Patients were randomly assigned to receive either 6 mmol KNO3 first (n = 41) or 6 mmol potassium chloride (KCl) first (n = 43) three times daily for 6 weeks, with a 1-week washout period in between.
- At the end of each intervention phase, an incremental cardiopulmonary exercise test was conducted using a supine cycle ergometer.
- Primary endpoints were the difference in peak oxygen uptake and total work performed during the exercise test; secondary endpoints included quality of life, left ventricular systolic and diastolic function, exercise systemic vasodilatory reserve, and parameters related to pulsatile arterial load.
TAKEAWAY:
- The administration of KNO3 vs KCl increased the levels of serum metabolites of nitric oxide significantly after 6 weeks (418.44 vs 40.11 μM; P < .001).
- Neither peak oxygen uptake nor total work performed improved significantly with the administration of KNO3 compared with KCl. Quality of life also did not improve with KNO3.
- Mean arterial pressure at peak exercise was significantly lower after the administration of KNO3 than after KCl (122.5 vs 127.6 mm Hg; P = .04), but the vasodilatory reserve and resting and orthostatic blood pressure did not differ.
- Adverse events were mostly minor, with gastrointestinal issues being the most common side effects reported.
IN PRACTICE:
“In this randomized crossover trial, chronic KNO3 administration did not improve exercise capacity or quality of life, as compared with KCl among participants with HFpEF,” the authors of the study wrote.
SOURCE:
The study was led by Payman Zamani, MD, MTR, of the Perelman School of Medicine at the University of Pennsylvania, Philadelphia. It was published online on December 18, 2024, in JAMA Cardiology.
LIMITATIONS:
The potential activation of compensatory mechanisms by the chronic inorganic nitrate administration may have neutralized the short-term benefits. Various abnormalities in oxygen transport may be present simultaneously in patients with HFpEF, suggesting a combination of interventions may be required to improve exercise capacity.
DISCLOSURES:
This trial was supported by the National Heart, Lung, and Blood Institute, with additional support from the National Center for Advancing Translational Sciences and the National Institutes of Health. Some authors reported receiving grants, personal fees, and consulting fees and holding patents from various pharmaceutical and medical device companies and institutes. One author reported full-time employment with a healthcare company.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Lowering Urate May Protect Kidneys in Gout Patients With CKD
TOPLINE:
Achieving a serum urate level below 6 mg/dL with urate-lowering therapy (ULT) in patients with gout and chronic kidney disease (CKD) stage III is not linked to an increased risk for severe or end-stage kidney disease.
METHODOLOGY:
- Researchers emulated a hypothetical target trial using a cloning, censoring, and weighting approach to evaluate the association between achieving the target serum urate level with ULT and the progression of CKD in patients with gout and CKD stage III.
- They included 14,972 patients (mean age, 73.1 years; 37.7% women) from a general practice database who had a mean baseline serum urate level of 8.9 mg/dL and initiated ULTs such as allopurinol or febuxostat.
- Participants were divided into two groups: those who achieved a target serum urate level < 6 mg/dL within 1 year after the initiation of ULT and those who did not; the mean follow-up duration was slightly more than 3 years in both groups.
- The primary outcome was the occurrence of severe or end-stage kidney disease within 5 years of initiating ULT, defined by an estimated glomerular filtration rate below 30 mL/min per 1.73 m² on two occasions more than 90 days apart within 1 year, or at least one Read code for CKD stage IV or V, dialysis, or kidney transplant.
- A prespecified noninferiority margin for the hazard ratio was set at 1.2 to compare the outcomes between those who achieved the target serum urate level < 6 mg/dL and those who did not.
TAKEAWAY:
- Among the patients who initiated ULT, 31.8% achieved a target serum urate level < 6 mg/dL within 1 year.
- The 5-year risk for severe or end-stage kidney disease was lower (10.32%) in participants with gout and stage III CKD who achieved the target serum urate level than in those who did not (12.73%).
- The adjusted 5-year risk for severe or end-stage kidney disease in patients who achieved the target serum urate level was noninferior to that in those who did not (adjusted hazard ratio [aHR], 0.89; 95% CI, 0.80-0.98; P for noninferiority < .001); results were consistent for end-stage kidney disease alone (aHR, 0.67; P for noninferiority = .001).
- Similarly, in participants with gout and CKD stages II-III, the 5-year risks for severe or end-stage kidney disease (aHR, 0.91) and end-stage kidney disease alone (aHR, 0.73) were noninferior in the group that did vs that did not achieve target serum urate levels, with P for noninferiority being < .001 and .003, respectively.
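The noninferiority test above reduces to a bound check: the result is noninferior when the upper limit of the hazard ratio's confidence interval falls below the prespecified margin of 1.2. A minimal sketch using the reported primary result:

```python
def is_noninferior(ci_upper, margin=1.2):
    """Noninferiority holds when the upper bound of the hazard ratio's
    confidence interval stays below the prespecified margin."""
    return ci_upper < margin

# Reported primary result: aHR 0.89 (95% CI, 0.80-0.98) vs a 1.2 margin.
print(is_noninferior(0.98))  # True: 0.98 < 1.2
```

Because the entire confidence interval here also sits below 1.0, the result is consistent not just with "no added harm" but with a possible protective effect, which is how the authors frame it.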
IN PRACTICE:
“Our findings suggest that lowering serum urate levels to < 6 mg/dL is generally well tolerated and may even slow CKD progression in these individuals. Initiatives to optimize the use and adherence to ULT could benefit clinicians and patients,” the authors wrote.
SOURCE:
This study was led by Yilun Wang, MD, PhD, Xiangya Hospital, Central South University, Changsha, China. It was published online in JAMA Internal Medicine.
LIMITATIONS:
Residual confounding may still have been present despite rigorous methods to control it, as is common in observational studies. Participants who achieved target serum urate levels may have received better healthcare, adhered to other treatments more consistently, and used ULT for a longer duration. The findings may have limited generalizability, as participants who did not achieve target serum urate levels prior to initiation were excluded.
DISCLOSURES:
This study was supported by the China National Key Research and Development Plan, the National Natural Science Foundation of China, the Project Program of the National Clinical Research Center for Geriatric Disorders, and other sources. Two authors reported receiving personal fees and/or grants from multiple pharmaceutical companies.
Genetic Markers May Predict TNF Inhibitor Response in Rheumatoid Arthritis
TOPLINE:
Genetic markers, specifically tumor necrosis factor alpha receptor 2 (TNFR2) gene polymorphisms, may predict response to TNF inhibitor therapy in patients with rheumatoid arthritis (RA). This approach could optimize treatment and improve patient outcomes.
METHODOLOGY:
- The study aimed to determine if TNFR2 gene polymorphisms could serve as biomarkers for treatment responsiveness to TNF inhibitors.
- It included 52 adult patients with RA (average age, 57.4 years; mean body mass index, 31.4; 65% women; 80% White) who had a mean disease duration of 8.9 years and started treatment with a single TNF inhibitor (infliximab, adalimumab, etanercept, golimumab, or certolizumab pegol).
- TNFR2-M (methionine) and TNFR2-R (arginine) gene polymorphisms were identified using genomic DNA isolated from patients’ blood samples to determine M/M, M/R, or R/R genotypes.
- The primary outcome was nonresponse to TNF inhibitors, defined as discontinuation of medication in < 3 months.
- The relationship between TNF inhibitor responsiveness and TNFR2 gene polymorphisms was analyzed using univariable logistic regression.
TAKEAWAY:
- Genomic DNA analysis revealed that 28 patients were homozygous for methionine, 22 were heterozygous, and two were homozygous for arginine.
- Of these, 96.4% of patients with the M/M genotype were responders to TNF inhibitors, whereas 75% of those with the M/R genotype and 50% with the R/R genotype were responders.
- Patients with the M/M genotype had approximately 10 times higher odds of responding to TNF inhibitors than those with the M/R and R/R genotypes (odds ratio, 10.12; P = .04).
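An odds ratio like the one reported is derived from a 2×2 table of responders and nonresponders by genotype group. A minimal sketch of the arithmetic, using hypothetical counts chosen for round numbers rather than the study's actual table:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
        a = responders in group 1,  b = nonresponders in group 1
        c = responders in group 2,  d = nonresponders in group 2
    The OR is the ratio of the two groups' odds of response.
    """
    return (a / b) / (c / d)

# Hypothetical counts (not the study's data): 27/1 responders in one
# genotype group vs 18/6 in the other.
print(odds_ratio(27, 1, 18, 6))  # 27.0 / 3.0 = 9.0
```

With only two R/R patients in the cohort, the study's estimate carries a wide confidence interval, which is consistent with the small-sample limitation the authors note.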
IN PRACTICE:
“Identifying predictors for nonresponsiveness to TNF antagonists based on TNFR2 gene polymorphisms may become a valuable tool for personalized medicine, allowing for a more specific TNF [inhibitor] therapy in RA patients,” the authors wrote. “Given that TNF [inhibitor] therapy is used for many autoimmune conditions beyond RA, this genotyping could potentially serve as a useful framework for personalized treatment strategies in other autoimmune diseases to delay or reduce disease progression overall.”
SOURCE:
This study was led by Elaine Husni, MD, MPH, Lerner Research Institute, Cleveland Clinic in Ohio. It was published online on November 7, 2024, in Seminars in Arthritis and Rheumatism and presented as a poster at the American College of Rheumatology (ACR) 2024 Annual Meeting.
LIMITATIONS:
This study’s sample size was relatively small.
DISCLOSURES:
This study was supported by the Arthritis Foundation and in part by the National Institutes of Health. No relevant conflicts of interest were disclosed by the authors.
Fitness Watch Bands Laden With PFHxA May Pose Health Risks
TOPLINE:
Perfluorohexanoic acid (PFHxA) is found in fluoroelastomer watch bands at concentrations of up to 16,662 ng/g, highlighting the need for further research on dermal absorption and exposure risks.
METHODOLOGY:
- Fluoroelastomers are a subclass of polymeric per- and polyfluoroalkyl substances (PFAS), which are used to help wearable device materials maintain their appearance and structure after contact with the skin, sweat, and personal care products (eg, sunscreen).
- Researchers investigated the presence of PFAS in 22 new and used US fitness and smart watch bands from a range of brands and price points, of which 13 were advertised as containing fluoroelastomers.
- Total fluorine concentrations were measured using particle-induced gamma-ray emission spectroscopy with cut pieces of the watch bands.
- Solvent extraction was performed, and targeted analysis for 20 PFAS compounds was conducted using liquid chromatography-tandem mass spectrometry.
- A subset of six watch bands, with the highest and lowest detectable PFAS concentrations (three each), was subjected to a direct total oxidative precursor assay to determine the presence of PFAS precursors.
TAKEAWAY:
- Watch bands advertised as containing fluoroelastomers had total fluorine concentrations ranging from 28.5% to 90.7%; only two of the nine bands not advertised as containing fluoroelastomers had detectable fluorine, at concentrations ranging from 28.1% to 49.7%.
- Expensive watch bands showed high fluorine levels, with concentrations ranging from 49.7% to 90.7%, whereas inexpensive bands contained less than 1% fluorine on their surface.
- PFHxA was the most common PFAS, detected in 41% of the watch bands.
- PFHxA had a median concentration of 773 ng/g, much higher than the concentrations found in other consumer products, with one sample reaching 16,662 ng/g.
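For context on what a ng/g concentration implies, the total extractable mass scales with the band's weight. A back-of-envelope sketch, in which the 20 g band mass is an assumed illustrative value, not a figure from the study:

```python
def extractable_mass_ug(conc_ng_per_g, band_mass_g):
    """Total analyte mass in micrograms: concentration (ng/g) times
    band mass (g), converted from ng to ug (1,000 ng = 1 ug)."""
    return conc_ng_per_g * band_mass_g / 1000.0

# Median reported PFHxA concentration with a hypothetical 20 g band:
print(extractable_mass_ug(773, 20))  # 15.46 ug
```

How much of that mass could actually cross the skin is exactly the open question the authors flag, since dermal absorption of PFHxA from fluoroelastomers has not been quantified.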
IN PRACTICE:
“The thousands of ng/g of PFHxA available, paired with watch band users often wearing these items for more than 12 h per day, poses an opportunity for significant transfer to the dermis and subsequent human exposure,” the authors wrote.
“If the consumer wishes to purchase a higher-priced band, we suggest that they read the product descriptions and avoid any that are listed as containing fluoroelastomers,” said the study’s lead author in a press release.
SOURCE:
The study was led by Alyssa Wicks, University of Notre Dame in Indiana, and published online in Environmental Science & Technology Letters.
LIMITATIONS:
No limitations were reported in the study.
DISCLOSURES:
The study received funding from the University of Notre Dame. The authors declared no competing financial interests.
Which Biologics May Contribute to Cancer Risk in Patients With Rheumatoid Arthritis?
TOPLINE:
The initiation of biologic or targeted synthetic disease-modifying antirheumatic drugs (b/tsDMARDs), particularly rituximab and abatacept, is associated with an increased risk for incident cancer in patients with rheumatoid arthritis (RA) within 2 years of starting treatment.
METHODOLOGY:
- The researchers conducted a retrospective cohort study to assess the safety of tumor necrosis factor (TNF) inhibitors, non-TNF inhibitors, and Janus kinase (JAK) inhibitors in patients with RA using US administrative claims data from the Merative Marketscan Research Databases from November 2012 to December 2021.
- A total of 25,305 patients with RA (median age, 50 years; 79% women; 49% from the southern United States) were identified using diagnostic codes on or before treatment initiation.
- Treatment exposures, including the initiation of TNF inhibitors (adalimumab, etanercept, certolizumab pegol, golimumab, and infliximab), non-TNF inhibitors (abatacept, interleukin 6 [IL-6] inhibitors, and rituximab), and JAK inhibitors (tofacitinib, baricitinib, and upadacitinib), were compared.
- The primary outcome was any incident cancer (excluding nonmelanoma skin cancer) occurring after a minimum of 90 days and within 2 years of treatment initiation.
- Sensitivity analyses used 1:1 propensity matching to compare cancer rates between populations treated with rituximab, IL-6 inhibitors, abatacept, or JAK inhibitors and matched reference populations treated with TNF inhibitors.
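The 1:1 propensity matching used in the sensitivity analyses can be sketched as follows. This is a minimal illustration only: the cohort, covariates, and propensity weights below are synthetic placeholders, not the study's claims-data variables, which are not reported here.

```python
import math
import random

random.seed(0)

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Synthetic cohort: (age, comorbidity count, started comparator drug?).
# A real analysis would fit a logistic regression for the propensity
# score; fixed illustrative weights are used here instead.
cohort = [(random.gauss(50, 10), random.randint(0, 5), random.random() < 0.3)
          for _ in range(200)]

def propensity(age, comorbidities):
    # Hypothetical model of P(treatment | covariates)
    return logistic(0.03 * (age - 50) + 0.2 * comorbidities - 1.0)

treated = [i for i, (a, c, t) in enumerate(cohort) if t]
controls = [i for i, (a, c, t) in enumerate(cohort) if not t]
score = {i: propensity(a, c) for i, (a, c, t) in enumerate(cohort)}

# Greedy 1:1 nearest-neighbor matching on the propensity score:
# each treated patient is paired with the unused control whose score
# is closest, yielding matched groups of equal size.
pairs, unused = [], set(controls)
for i in sorted(treated, key=score.get):
    if not unused:
        break
    j = min(unused, key=lambda k: abs(score[k] - score[i]))
    pairs.append((i, j))
    unused.discard(j)
```

Outcome rates (here, incident cancer) would then be compared between the two matched groups rather than the full cohort, reducing confounding by the measured covariates.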
TAKEAWAY:
- Rituximab (adjusted hazard ratio [aHR], 1.91; 95% CI, 1.17-3.14) and abatacept (aHR, 1.47; 95% CI, 1.03-2.11) were significantly associated with an increased risk for incident cancer, compared with TNF inhibitors.
- In the propensity-matched analysis, a statistically significant increase in risk was observed in patients treated with rituximab (aHR, 4.37; 95% CI, 1.48-12.93) and abatacept (aHR, 3.12; 95% CI, 1.52-6.44).
- IL-6 inhibitors showed no significant association with cancer in the primary analysis, but a significantly increased risk was observed in the propensity-matched analysis (HR, 5.65; 95% CI, 1.11-28.79).
- JAK inhibitors were not associated with a significant increase in the risk for cancer, compared with TNF inhibitors.
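The significance calls in these bullets follow directly from whether each ratio's 95% CI excludes the null value of 1; a minimal check, using the CI bounds reported above:

```python
def ci_excludes_null(lower, upper, null=1.0):
    """A hazard or odds ratio is reported as statistically significant
    at the 5% level when its 95% CI excludes the null value of 1."""
    return null < lower or null > upper

# CI bounds from the study's primary analysis
print(ci_excludes_null(1.17, 3.14))  # rituximab, aHR 1.91 -> True
print(ci_excludes_null(1.03, 2.11))  # abatacept, aHR 1.47 -> True
```

By the same logic, a CI that spans 1 (as for JAK inhibitors vs TNF inhibitors) is read as no significant difference.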
IN PRACTICE:
“Given the limitations of using private insurance claims data and confounding by indication, it is likely that these patients may have a higher disease burden, resulting in channeling bias,” the authors wrote. “To understand these associations, larger studies with longer follow-up and more granular collection of data, including medication indications and RA disease activity measures, would be needed for better comparison of incident cancer risk among these drugs,” they added.
SOURCE:
The study was led by Xavier Sendaydiego, MD, University of Washington, Seattle. It was published online in JAMA Network Open.
LIMITATIONS:
A relatively small number of cancer outcomes may have affected the ability to adjust for confounders. The follow-up period was limited to 2 years, potentially missing long-term cancer risks. The use of US-specific administrative claims data, including only patients aged 18-64 years, may limit the generalizability of the findings. Additionally, the claims data lacked direct measures of disease activity or severity of RA, and information on treatment adherence was unavailable, leading to potential misclassification.
DISCLOSURES:
The study was supported by grants from the National Institute of Arthritis and Musculoskeletal and Skin Diseases and the National Institute on Aging. Some authors reported receiving personal fees, nonfinancial support, and grants from various pharmaceutical companies or government sources. One author reported having a pending patent and another author reported receiving a fellowship, travel reimbursement, and royalties outside the submitted work.
This article was created using several editorial tools, including artificial intelligence, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Self-Care Can Elevate Quality of Life in Chronic Diseases
TOPLINE:
Self-care preparedness is positively associated with improved health-related quality of life (HRQOL) in patients with chronic conditions over 36 months, and patients who enhance their self-care preparedness experience better QOL outcomes.
METHODOLOGY:
- A secondary analysis of a randomized controlled trial conducted in Finland from 2017 to 2021 aimed to analyze the longitudinal associations between self-care preparedness and HRQOL over a 36-month follow-up period.
- A total of 256 adults with hypertension, diabetes, or coronary artery disease who participated in a patient care planning process in primary healthcare and completed the self-care intervention were included.
- The intervention comprised individualized care plans with a self-care form, including the self-care preparedness index (SCPI), which was initially mailed to the participants; the form explained self-care concepts and included assessments of health behaviors and willingness to change.
- Self-care preparedness was measured using SCPI scores, which were divided into tertiles: low (−5 to 0), moderate (1-3), and high (4-5) preparedness.
- Outcome measures assessed at baseline and at 12 and 36 months included changes in the SCPI; HRQOL, assessed using 15D, which is a 15-dimensional measure; depressive symptoms; self-rated health; life satisfaction; and physical activity. The associations were analyzed using regression models.
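The tertile cutoffs above translate into a simple classifier; the function name is illustrative, but the score ranges are those used in the study:

```python
def scpi_tertile(score: int) -> str:
    """Classify a self-care preparedness index (SCPI) score into
    the study's tertiles: low (-5 to 0), moderate (1-3), high (4-5)."""
    if not -5 <= score <= 5:
        raise ValueError("SCPI scores range from -5 to 5")
    if score <= 0:
        return "low"
    if score <= 3:
        return "moderate"
    return "high"

print(scpi_tertile(-2))  # low
print(scpi_tertile(2))   # moderate
print(scpi_tertile(5))   # high
```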
TAKEAWAY:
- At baseline, participants with a higher SCPI score showed higher physical activity, life satisfaction, self-rated health, and management of their overall health; however, body mass index and the presence of depressive symptoms had a negative relationship with SCPI.
- Various dimensions of 15D, particularly usual activities, discomfort and symptoms, distress, depression, vitality, and sexual activity, showed a positive linear relationship with SCPI at baseline.
- A lower SCPI score at baseline was associated with greater improvements in the measures of HRQOL.
- A significant positive longitudinal association was observed between changes in SCPI and 15D from baseline to 36 months (beta coefficient, +0.19; P = .002), showing that QOL can improve if patients manage to improve their SCPI.
IN PRACTICE:
“SCPI could be used as an indicative index, keeping in mind that participants with lower SCPI have the potential to benefit and change their health behavior the most. The patient and the healthcare provider should consider which areas of self-care the patient needs support,” the authors wrote. “This study provides further knowledge of this tool for the purpose of aiding healthcare professionals in screening self-care preparedness in primary healthcare,” they added.
SOURCE:
The study was led by Ulla Mikkonen, Institute of Public Health and Clinical Nutrition, University of Eastern Finland, Kuopio, Finland. It was published online in Family Practice.
LIMITATIONS:
The relatively small sample size limited to a local area in Finland may have affected the generalizability of the findings. Additionally, variations in the implementation of the intervention in real-life settings could have influenced the results. The data on whether general practitioners used the SCPI to formulate care plans were lacking.
DISCLOSURES:
The study received funding from the Primary Health Care Unit of the Northern Savo Hospital District and Siilinjärvi Health Center. The authors declared no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Skin Stress Biomarker May Predict Nerve Damage in Early T2D
TOPLINE:
Increased cutaneous carbonyl stress is linked to slower nerve conduction in patients with metabolically well-controlled, recent-onset type 2 diabetes (T2D) and can predict the development of neuropathic deficits over 5 years.
METHODOLOGY:
- Accumulation of advanced glycation end products (AGEs), which results from endogenous carbonyl stress, may be a potential target for preventing and treating the diabetic sensorimotor polyneuropathy (DSPN) that is a common complication of T2D.
- Researchers investigated novel cutaneous biomarkers for the development and progression of DSPN in 160 individuals with recent-onset T2D (diagnosed within the previous 12 months) and 144 individuals with normal glucose tolerance, all recruited consecutively from the German Diabetes Study baseline cohort.
- Peripheral nerve function was assessed through nerve conduction studies, quantitative sensory testing, and clinical neuropathy scores.
- Skin biopsies were used to analyze intraepidermal nerve fiber density, endothelial integrity, cutaneous oxidative stress markers, and cutaneous carbonyl stress markers, including AGE autofluorescence and argpyrimidine area.
- Skin autofluorescence was measured noninvasively using an AGE reader device.
- A subgroup of 80 patients with T2D was reassessed after 5 years to evaluate the progression of neurophysiological deficits.
TAKEAWAY:
- Patients with recent-onset T2D had greater AGE autofluorescence and argpyrimidine area (P ≤ .05 for both) and lower nerve fiber density (P ≤ .05) than individuals with normal glucose tolerance.
- In patients with T2D, AGE autofluorescence was inversely associated with nerve conduction (P = .0002, P = .002, and P = .001 for peroneal motor, median motor, and sural sensory nerve conduction velocity, respectively) and positively associated with AGE reader measurements (P < .05); no such associations were observed in those with normal glucose tolerance.
- In the prospective T2D cohort, baseline cutaneous markers of AGEs and endothelial integrity were associated with changes in nerve function indices over the 5-year period.
IN PRACTICE:
“Prospective analyses revealed some predictive value of cutaneous AGEs and lower endothelial integrity for declining nerve function, supporting the role of carbonyl stress in the development and progression of DSPN, representing a potential therapeutic target,” the authors wrote.
SOURCE:
The study was led by Gidon J. Bönhof, Department of Endocrinology and Diabetology, Medical Faculty, University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany. It was published online in Diabetes Care.
LIMITATIONS:
The observational design of the study limited the ability to draw causal conclusions. The groups were not matched for age or body mass index. Various mechanisms related to DSPN were analyzed; however, specific pathways of AGEs were not studied in detail. The relatively low number of individuals with clinically manifested DSPN limited the exploration of different stages of the condition.
DISCLOSURES:
The study was supported by a German Center for Diabetes Research grant. The German Diabetes Study was supported by the German Diabetes Center funded by the German Federal Ministry of Health (Berlin), the Ministry of Innovation, Science, Research and Technology of North Rhine-Westphalia (Düsseldorf, Germany), and grants from the German Federal Ministry of Education and Research to the German Center for Diabetes Research e.V. No relevant conflicts of interest were reported.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
TOPLINE:
Increased cutaneous carbonyl stress is linked to slower nerve conduction in patients with metabolically well-controlled, recent-onset type 2 diabetes (T2D) and can predict the development of neuropathic deficits over 5 years.
METHODOLOGY:
- Accumulation of advanced glycation end products (AGEs), which results from endogenous carbonyl stress, may be a potential target for preventing and treating the diabetic sensorimotor polyneuropathy (DSPN) that is a common complication of T2D.
- Researchers investigated novel cutaneous biomarkers for the development and progression of DSPN in 160 individuals with recent-onset T2D (known diabetes duration of 12 months or less) and 144 individuals with normal glucose tolerance, all recruited consecutively from the German Diabetes Study baseline cohort.
- Peripheral nerve function was assessed through nerve conduction studies, quantitative sensory testing, and clinical neuropathy scores.
- Skin biopsies were used to analyze intraepidermal nerve fiber density, endothelial integrity, cutaneous oxidative stress markers, and cutaneous carbonyl stress markers, including AGE autofluorescence and argpyrimidine area.
- Skin autofluorescence was measured noninvasively using an AGE reader device.
- A subgroup of 80 patients with T2D was reassessed after 5 years to evaluate the progression of neurophysiological deficits.
TAKEAWAY:
- Patients with recent-onset T2D had greater AGE autofluorescence and argpyrimidine area (P ≤ .05 for both) and lower nerve fiber density (P ≤ .05) than individuals with normal glucose tolerance.
- In patients with T2D, AGE autofluorescence was inversely associated with nerve conduction (P = .0002, P = .002, and P = .001 for peroneal motor, median motor, and sural sensory nerve conduction velocity, respectively) and positively associated with AGE reader measurements (P < .05); no such associations were observed in those with normal glucose tolerance.
- In the prospective T2D cohort, baseline cutaneous markers of AGEs and of endothelial cells were associated with changes in nerve function indices over the 5-year follow-up period.
IN PRACTICE:
“Prospective analyses revealed some predictive value of cutaneous AGEs and lower endothelial integrity for declining nerve function, supporting the role of carbonyl stress in the development and progression of DSPN, representing a potential therapeutic target,” the authors wrote.
SOURCE:
The study was led by Gidon J. Bönhof, Department of Endocrinology and Diabetology, Medical Faculty, University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany. It was published online in Diabetes Care.
LIMITATIONS:
The observational design of the study limited the ability to draw causal conclusions. The groups were not matched for age or body mass index. Various mechanisms related to DSPN were analyzed; however, specific pathways of AGEs were not studied in detail. The relatively low number of individuals with clinically manifested DSPN limited the exploration of different stages of the condition.
DISCLOSURES:
The study was supported by a German Center for Diabetes Research grant. The German Diabetes Study was supported by the German Diabetes Center funded by the German Federal Ministry of Health (Berlin), the Ministry of Innovation, Science, Research and Technology of North Rhine-Westphalia (Düsseldorf, Germany), and grants from the German Federal Ministry of Education and Research to the German Center for Diabetes Research e.V. No relevant conflicts of interest were reported.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.