Noninvasive Microbiome Test May Specifically Identify Crohn’s and Ulcerative Colitis
International researchers have uncovered potentially diagnostic gut microbiome signatures and metabolic pathways associated specifically with ulcerative colitis (UC) and Crohn’s disease (CD).
Targeted droplet digital polymerase chain reaction (ddPCR)‒based quantification of bacterial species led to convenient inflammatory bowel disease (IBD) diagnostic assays that “are sufficiently robust, sensitive and cost-effective for clinical application,” the investigators wrote in a recent study published in Nature Medicine.
“Although traditional modalities used for diagnosis of IBD, including colonoscopy and cross-sectional imaging, are well established, the inconvenience of bowel preparation and radiation represents relevant concerns,” senior author Siew C. Ng, MBBS, PhD, a professor in the Department of Medicine and Therapeutics at the Chinese University of Hong Kong, said in an interview. “Furthermore, existing serological and fecal markers indicate inflammation but lack specificity for IBD.”
Identifying reproducible bacterial biomarkers specific to CD and UC should enable precise and personalized approaches to detection and management.
As a starting point, the researchers hypothesized that changes in the gut microbiome of IBD patients may reflect underlying functional associations, if not causes, of the disease, said Ng, who is also director of Hong Kong’s Microbiota I-Center (MagIC). “Unlike inflammation, which is a manifestation of the disease, the gut microbiome may serve as a more reliable biomarker less affected by the disease’s fluctuating cycle.”
The study findings showed that bacterial markers remain consistent even during the inactive disease phase, she added. “With a better performance than the commonly used noninvasive test, fecal calprotectin, we believe the test will be a valuable addition to the clinician’s toolbox and a strong option for first-line diagnostics.”
The Study
The group used metagenomic data from 5979 fecal samples from persons with and without IBD from different regions (including the United States) and of different ethnicities. Identifying several microbiota alterations in IBD, they selected bacterial species to construct diagnostic models for UC (n = 10) and CD (n = 9). Some species were depleted and some were enriched in IBD.
Metagenomic findings confirmed, for example, enrichments of Escherichia coli and Bacteroides fragilis in the guts of CD patients, with adherent invasive E coli present in more than half of these. This pathogen has been linked to mucosal dysbiosis and functional alteration, and has been associated with disease activity and endoscopic recurrence following surgery. B fragilis may induce intestinal inflammation through toxin production.
The researchers also identified a new oral bacterium, Actinomyces species oral taxon 181, which was significantly enriched in stool samples from patients with both CD and UC.
The diagnostic models achieved areas under the curve of > 0.90 for distinguishing IBD patients from controls in the discovery cohort and maintained satisfactory performance in transethnic validation cohorts from eight populations.
Ng’s group further developed a multiplex droplet digital PCR test targeting selected IBD-associated bacterial species. Models based on this test showed numerically higher performance than fecal calprotectin in discriminating UC and CD samples from controls. These universally IBD-associated bacteria suggest the potential applicability of a biomarker panel for noninvasive diagnosis.
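To make the modeling step concrete, the sketch below fits a simple logistic-regression classifier to simulated bacterial abundances and reports a cross-validated area under the curve. It illustrates the general approach of panel-based classification only; the data, species count, and preprocessing are placeholder assumptions, not the authors’ pipeline.

```python
# Illustrative sketch with simulated ddPCR-style abundances; not study data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_species = 400, 10                     # e.g., a 10-species UC panel
y = rng.integers(0, 2, size=n_samples)             # 1 = IBD, 0 = control (mock labels)
X = rng.lognormal(sigma=1.0, size=(n_samples, n_species))
X[y == 1, :3] *= 4.0                               # pretend three species are enriched in IBD

X_log = np.log10(X + 1.0)                          # abundances are typically log-transformed

model = LogisticRegression(max_iter=1000)
probs = cross_val_predict(model, X_log, y, cv=5, method="predict_proba")[:, 1]
print(f"cross-validated AUC: {roc_auc_score(y, probs):.2f}")
```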
Commenting on the paper but not involved in it, Ashwin N. Ananthakrishnan, MBBS, MPH, AGAF, director of the Crohn’s and Colitis Center at Massachusetts General Hospital in Boston and associate professor of medicine at Harvard Medical School, called it “a very important study that highlights the potential role of a microbiome-based diagnostic for screening. It could have application in a wide variety of settings and is very promising.”
More work, however, is necessary to clarify such testing’s role. “The study’s validation in independent cohorts is an important strength, but the sizes of those cohorts are still quite small,” he said in an interview. “It’s important to understand its accuracy across a spectrum of IBD phenotypes and severity.”
Furthermore, endoscopic evaluation at diagnosis is important to establish severity and extent of disease. “It’s not clear this diagnostic biomarker can help supplant that role. But I see potential value to it for patients for whom we may not be considering endoscopy yet but who would like to risk-stratify.”
The Test’s Future
“We expect to see a real shift in clinical practice,” Ng said. “As a cost-effective test, it will help millions of people dealing with nonspecific gastrointestinal symptoms get the diagnoses they need.” Because the bacterial test can identify IBD at an inactive stage, it has the potential for early diagnosis. “This capability allows clinicians to initiate treatment sooner, helping to prevent progression from subclinical to clinical stages of the disease.”
The next research steps involve prospective studies with a larger and more diverse group of patients with various gastrointestinal symptoms. “This will enable a comprehensive evaluation of bacterial biomarkers in real-world populations,” she said. In vivo and in vitro experiments are expected to provide mechanistic insights into the causal role of these bacteria and metabolic dysregulations in the pathogenesis of IBD, as well as their future clinical utility in disease monitoring and predicting treatment response.
Her group plans to work with the biotech industry and regulatory agencies to transform these biomarkers into an approved test kit. “The rollout is likely to be gradual, but we’re optimistic that supportive international and national guidelines will be developed and will pave the way for widespread implementation.”
This study was supported by various academic, charitable, and governmental research-funding bodies, including the governments of Hong Kong and the People’s Republic of China. Ng has served as an advisory board member or speaker for Pfizer, Ferring, Janssen, AbbVie, Tillotts, Menarini, and Takeda. She has received research grants through her institutions from Olympus, Ferring, and AbbVie and is a founding member and shareholder of GenieBiome. She receives patent royalties through her institutions, including MagIC, which holds patents on the therapeutic and diagnostic use of the microbiome in IBD. Several co-authors reported various relationships, including patent holding, with private-sector companies. Ananthakrishnan had no relevant competing interests.
A version of this article first appeared on Medscape.com.
FROM NATURE MEDICINE
Lowering Urate May Protect Kidneys in Gout Patients With CKD
TOPLINE:
Achieving a serum urate level below 6 mg/dL with urate-lowering therapy (ULT) in patients with gout and chronic kidney disease (CKD) stage III is not linked to an increased risk for severe or end-stage kidney disease.
METHODOLOGY:
- Researchers emulated analyses of a hypothetical target trial using a cloning, censoring, and weighting approach to evaluate the association between achieving target serum urate level with ULT and the progression of CKD in patients with gout and CKD stage III.
- They included 14,972 patients (mean age, 73.1 years; 37.7% women) from a general practice database who had a mean baseline serum urate level of 8.9 mg/dL and initiated ULTs such as allopurinol or febuxostat.
- Participants were divided into two groups: those who achieved a target serum urate level < 6 mg/dL within 1 year after the initiation of ULT and those who did not; the mean follow-up duration was a little more than 3 years in both groups.
- The primary outcome was the occurrence of severe or end-stage kidney disease within 5 years of initiating ULT, defined by an estimated glomerular filtration rate below 30 mL/min/1.73 m² on two occasions more than 90 days apart within 1 year, or at least one Read code for CKD stage IV or V, dialysis, or kidney transplant.
- A prespecified noninferiority margin for the hazard ratio was set at 1.2 to compare the outcomes between those who achieved the target serum urate level < 6 mg/dL and those who did not.
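As an illustration of how such a noninferiority margin is applied (a worked check, not a re-analysis of the study data), the snippet below recovers the standard error of the log hazard ratio from a reported 95% CI and tests it against the 1.2 margin, using numbers that mirror the primary result reported in the takeaway below.

```python
# Worked illustration of a noninferiority check on a hazard ratio; inputs mirror
# the reported primary result (aHR, 0.89; 95% CI, 0.80-0.98) and are not a re-analysis.
from math import log, sqrt, erfc

hr, ci_low, ci_high = 0.89, 0.80, 0.98    # reported adjusted HR and 95% CI
margin = 1.2                               # prespecified noninferiority margin

se_log_hr = (log(ci_high) - log(ci_low)) / (2 * 1.96)   # SE of log(HR) implied by the CI

# One-sided test of H0: HR >= margin (harm) against H1: HR < margin (noninferior).
z = (log(hr) - log(margin)) / se_log_hr
p_noninferiority = 0.5 * erfc(-z / sqrt(2))              # standard normal CDF at z

print(f"upper CI bound {ci_high} < margin {margin}: {ci_high < margin}")
print(f"one-sided P for noninferiority: {p_noninferiority:.1e}")   # well below .001
```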
TAKEAWAY:
- Among the patients who initiated ULT, 31.8% achieved a target serum urate level < 6 mg/dL within 1 year.
- The 5-year risk for severe or end-stage kidney disease was lower (10.32%) in participants with gout and stage III CKD who achieved the target serum urate level than in those who did not (12.73%).
- The adjusted 5-year risk for severe or end-stage kidney disease was noninferior in patients who achieved the target serum urate level vs those who did not (adjusted hazard ratio [aHR], 0.89; 95% CI, 0.80-0.98; P for noninferiority < .001); results were consistent for end-stage kidney disease alone (aHR, 0.67; P for noninferiority = .001).
- Similarly, in participants with gout and CKD stages II-III, the 5-year risks for severe or end-stage kidney disease (aHR, 0.91) and end-stage kidney disease alone (aHR, 0.73) were noninferior in the group that did vs that did not achieve target serum urate levels, with P for noninferiority being < .001 and .003, respectively.
IN PRACTICE:
“Our findings suggest that lowering serum urate levels to < 6 mg/dL is generally well tolerated and may even slow CKD progression in these individuals. Initiatives to optimize the use and adherence to ULT could benefit clinicians and patients,” the authors wrote.
SOURCE:
This study was led by Yilun Wang, MD, PhD, Xiangya Hospital, Central South University, Changsha, China. It was published online in JAMA Internal Medicine.
LIMITATIONS:
Residual confounding may still have been present despite rigorous methods to control it, as is common in observational studies. Participants who achieved target serum urate levels may have received better healthcare, adhered to other treatments more consistently, and used ULT for a longer duration. The findings may have limited generalizability, as participants who did not achieve target serum urate levels prior to initiation were excluded.
DISCLOSURES:
This study was supported by the China National Key Research and Development Plan, the National Natural Science Foundation of China, the Project Program of the National Clinical Research Center for Geriatric Disorders, and other sources. Two authors reported receiving personal fees and/or grants from multiple pharmaceutical companies.
This article was created using several editorial tools, including artificial intelligence, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Genetic Markers May Predict TNF Inhibitor Response in Rheumatoid Arthritis
TOPLINE:
Genetic markers, specifically tumor necrosis factor alpha receptor 2 (TNFR2) gene polymorphisms, may predict response to TNF inhibitor therapy in patients with rheumatoid arthritis (RA). This approach could optimize treatment and improve patient outcomes.
METHODOLOGY:
- The study aimed to determine if TNFR2 gene polymorphisms could serve as biomarkers for treatment responsiveness to TNF inhibitors.
- It included 52 adult patients with RA (average age, 57.4 years; mean body mass index, 31.4; 65% women; 80% White) who had a mean disease duration of 8.9 years and started treatment with a single TNF inhibitor (infliximab, adalimumab, etanercept, golimumab, or certolizumab pegol).
- TNFR2-M (methionine) and TNFR2-R (arginine) gene polymorphisms were identified using genomic DNA isolated from patients’ blood samples to determine M/M, M/R, or R/R genotypes.
- The primary outcome was nonresponse to TNF inhibitors, defined as discontinuation of medication in < 3 months.
- The relationship between TNF inhibitor responsiveness and TNFR2 gene polymorphisms was analyzed using univariable logistic regression.
TAKEAWAY:
- Genomic DNA analysis revealed that 28 patients were homozygous for methionine, 22 were heterozygous, and two were homozygous for arginine.
- Of these, 96.4% of patients with the M/M genotype were responders to TNF inhibitors, whereas 75% of those with the M/R genotype and 50% with the R/R genotype were responders.
- Patients with the M/M genotype had approximately 10 times higher odds of responding to TNF inhibitors than those with the M/R and R/R genotypes (odds ratio, 10.12; P = .04).
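To show how a univariable odds ratio of this size arises, the arithmetic below works through a 2 × 2 response-by-genotype table. The counts are hypothetical placeholders chosen only to illustrate the calculation; they are not reconstructed from the study data.

```python
# Hypothetical counts, used only to illustrate the odds-ratio arithmetic.
responders_mm, nonresponders_mm = 27, 1           # M/M genotype (hypothetical)
responders_other, nonresponders_other = 18, 6     # M/R and R/R combined (hypothetical)

odds_mm = responders_mm / nonresponders_mm            # 27.0
odds_other = responders_other / nonresponders_other   # 3.0
print(f"odds ratio (M/M vs M/R + R/R): {odds_mm / odds_other:.1f}")   # 9.0
```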
IN PRACTICE:
“Identifying predictors for nonresponsiveness to TNF antagonists based on TNFR2 gene polymorphisms may become a valuable tool for personalized medicine, allowing for a more specific TNF [inhibitor] therapy in RA patients,” the authors wrote. “Given that TNF [inhibitor] therapy is used for many autoimmune conditions beyond RA, this genotyping could potentially serve as a useful framework for personalized treatment strategies in other autoimmune diseases to delay or reduce disease progression overall.”
SOURCE:
This study was led by Elaine Husni, MD, MPH, Lerner Research Institute, Cleveland Clinic in Ohio. It was published online on November 7, 2024, in Seminars in Arthritis and Rheumatism and presented as a poster at the American College of Rheumatology (ACR) 2024 Annual Meeting.
LIMITATIONS:
This study’s sample size was relatively small.
DISCLOSURES:
This study was supported by the Arthritis Foundation and in part by the National Institutes of Health. No relevant conflicts of interest were disclosed by the authors.
This article was created using several editorial tools, including artificial intelligence, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Skin Cancer Risk Elevated Among Blood, Marrow Transplant Survivors
TOPLINE:
Survivors of blood or marrow transplant (BMT) have an elevated risk for skin cancer, with a cumulative incidence of 27.4% over 30 years, according to the results of a cohort study.
METHODOLOGY:
- The retrospective cohort study included 3880 BMT survivors (median age, 44 years; 55.8% men; 4.9% Black, 12.1% Hispanic, and 74.7% non-Hispanic White individuals) who underwent transplant between 1974 and 2014.
- Participants completed the BMT Survivor Study survey and were followed up for a median of 9.5 years.
- The primary outcomes were the development of subsequent cutaneous malignant neoplasms: basal cell carcinoma (BCC), squamous cell carcinoma (SCC), or melanoma.
TAKEAWAY:
- The 30-year cumulative incidence of any cutaneous malignant neoplasm was 27.4% — 18% for BCC, 9.8% for SCC, and 3.7% for melanoma.
- A higher risk for skin cancer was reported for patients aged 50 years or older (subdistribution hazard ratio [SHR], 2.23; 95% CI, 1.83-2.71) and for men (SHR, 1.40; 95% CI, 1.18-1.65).
- Allogeneic BMT with chronic graft-vs-host disease (cGVHD) increased the risk for skin cancer (SHR, 1.84; 95% CI, 1.37-2.47), compared with autologous BMT, while post-BMT immunosuppression increased risk for all types (overall SHR, 1.53; 95% CI, 1.26-1.86).
- The risk for any skin cancer was significantly lower in Black individuals (SHR, 0.14; 95% CI, 0.05-0.37), Hispanic individuals (SHR, 0.29; 95% CI, 0.20-0.62), and patients of other races or who were multiracial (SHR, 0.22; 95% CI, 0.13-0.37) than in non-Hispanic White patients.
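For readers unfamiliar with cumulative incidence in the presence of competing risks (such as death before a skin cancer), here is a minimal, hand-rolled Aalen-Johansen-style estimator run on simulated data. It sketches the general method only; it is not the study’s analysis code, and the simulated follow-up times and event mix are arbitrary.

```python
# Sketch of a cumulative incidence (Aalen-Johansen) estimator on simulated data.
import numpy as np

def cumulative_incidence(time, event, cause=1):
    """Event codes: 0 = censored, 1 = cause of interest (e.g., skin cancer),
    2 = competing event (e.g., death before skin cancer)."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]

    n_at_risk = len(time)
    surv, cif = 1.0, 0.0              # overall event-free survival; cumulative incidence
    out_times, out_cif = [], []

    for t in np.unique(time):
        here = time == t
        d_any = np.sum(here & (event != 0))        # events of any cause at time t
        d_cause = np.sum(here & (event == cause))  # events of the cause of interest
        cif += surv * d_cause / n_at_risk          # cause-specific increment
        surv *= 1.0 - d_any / n_at_risk            # update overall survival
        n_at_risk -= np.sum(here)                  # drop events and censorings
        out_times.append(t)
        out_cif.append(cif)
    return np.array(out_times), np.array(out_cif)

rng = np.random.default_rng(1)
t = rng.exponential(20.0, size=500)                      # simulated follow-up, years
e = rng.choice([0, 1, 2], size=500, p=[0.5, 0.2, 0.3])   # simulated event codes
times, cif = cumulative_incidence(t, e, cause=1)
print(f"estimated 30-year cumulative incidence: {cif[times <= 30][-1]:.1%}")
```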
IN PRACTICE:
In the study, “risk factors for post-BMT cutaneous malignant neoplasms included pretransplant treatment with a monoclonal antibody, cGVHD, and posttransplant immunosuppression,” the authors wrote, adding that the findings “could inform targeted surveillance of BMT survivors.” Most BMT survivors “do not undergo routine dermatologic surveillance, highlighting the need to understand risk factors and incorporate risk-informed dermatologic surveillance into survivorship care plans.”
SOURCE:
The study was led by Kristy K. Broman, MD, MPH, University of Alabama at Birmingham, and was published online on December 18 in JAMA Dermatology.
LIMITATIONS:
Limitations included self-reported data and possible underreporting of melanoma cases in the SEER database. Additionally, the study did not capture other risk factors for cutaneous malignant neoplasms such as skin phototype, ultraviolet light exposure, or family history. The duration of posttransplant immunosuppression was not collected, and surveys were administered at variable intervals, though all were completed more than 2 years post BMT.
DISCLOSURES:
The study was supported by the National Cancer Institute (NCI) and the Leukemia and Lymphoma Society. Broman received grants from NCI, the National Center for Advancing Translational Sciences, the American Society of Clinical Oncology, and the American College of Surgeons. Another author reported receiving grants outside this work.
This article was created using several editorial tools, including artificial intelligence, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Novel Telecare Approach Transforms Alcohol Use Screening and Treatment in Primary Care Setting
TOPLINE:
A new telephone-based program implemented in a Federally Qualified Health Center (FQHC) demonstrates effectiveness in reducing unhealthy alcohol use among diverse adult patients screened using the Alcohol Use Disorders Identification Test (AUDIT).
METHODOLOGY:
- Researchers implemented a screening and team-based telephonic program within a large FQHC system in Texas in which adult patients were routinely screened using AUDIT-Consumption (AUDIT-C) questions.
- The team-based, telecare-centered program was designed to follow up positive screening results with full AUDIT assessments and to provide a two-session brief intervention for all patients. Patients with AUDIT scores ≥ 12 received the brief intervention along with a referral for additional support or an assessment for pharmacotherapy prescription.
- The researchers screened 3959 patients between March 2021 and May 2023, of whom 412 patients with positive results were successfully contacted and had their AUDIT completed (mean age, 46 years; 32% women; 86% Hispanic/Latino; 65% preferred Spanish).
- Of these, 29 patients had full AUDIT scores ranging from 0 to 3, 252 had scores between 4 and 12, and 131 had scores > 12.
- Follow-up AUDIT assessments conducted at 3-6 months were completed for 251 patients (26% women; 90% Hispanic/Latino), and those with AUDIT scores ≥ 12 were offered additional treatment options, including telecare services, in-person appointments with the addiction medicine clinic, and/or pharmacotherapy.
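A minimal sketch of the triage logic described above follows; the function name and return format are assumptions for illustration and are not taken from the program’s materials. Every patient who completes the full AUDIT is offered the two-session brief intervention, and a score of 12 or higher additionally triggers a referral or pharmacotherapy assessment.

```python
def triage_full_audit(score: int) -> list[str]:
    """Map a full AUDIT score to the services described in the program (illustrative)."""
    services = ["two-session brief telephone intervention"]
    if score >= 12:
        services.append("referral for additional support or pharmacotherapy assessment")
    return services

print(triage_full_audit(9))    # brief intervention only
print(triage_full_audit(15))   # brief intervention plus referral/pharmacotherapy assessment
```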
TAKEAWAY:
- Among the patients with an initial AUDIT score > 12, 19 received pharmacotherapy and 13 had at least one appointment with the addiction medicine service.
- For patients who completed the initial and final follow-ups, the mean change in AUDIT score was −4.1 (95% CI, −4.7 to −3.4).
- Spanish-speaking patients demonstrated a greater reduction in AUDIT scores than English-speaking patients.
- The mean reduction in the AUDIT score at the 3- to 6-month follow-up was larger in those with initial AUDIT scores > 12 than in those with initial AUDIT scores ≤ 12 (7.99 vs 2.25).
IN PRACTICE:
“Our intervention was delivered outside of traditional office visits and did not disrupt clinic flow or add burden to the practice’s providers, who already face significant challenges in serving this high-needs population,” the authors wrote. “We believe this program offers a template for delivering evidence-based, equitable preventive care for unhealthy alcohol use in a diverse patient population.”
SOURCE:
The study was led by Michael Pignone, MD, MPH, Department of Medicine, Duke University School of Medicine, Durham, North Carolina. It was published online in the Journal of General Internal Medicine.
LIMITATIONS:
The lack of systematic tracking for the unsuccessful attempts at establishing contact limited the understanding of the variations in screening positivity and the subsequent engagement in the program. Program staffing and budget constraints limited the ability to reach all potentially interested participants. The absence of a control group made it difficult to attribute the observed reductions in the AUDIT scores solely to the intervention. Follow-up data were collected from only 61% of participants, raising the possibility that those who were not reached may have had different outcomes than those who were successfully contacted.
DISCLOSURES:
The Cancer Prevention and Research Institute of Texas provided funding for this program. The authors reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
TOPLINE:
A new telephone-based program implemented in a Federally Qualified Health Center (FQHC) demonstrates effectiveness in reducing unhealthy alcohol use among diverse adult patients screened using the Alcohol Use Disorders Identification Test (AUDIT).
METHODOLOGY:
- Researchers implemented a screening and team-based telephonic program within a large FQHC system in Texas in which adult patients were routinely screened using AUDIT-Consumption (AUDIT-C) questions.
- The team-based, telecare-centered program was designed to follow-up positive screening results with full AUDIT assessments and to provide a two-session brief intervention for all patients. Patients with AUDIT scores ≥ 12 received the brief intervention along with a referral for additional support or an assessment for pharmacotherapy prescription.
- The researchers screened 3959 patients between March 2021 and May 2023, of whom 412 patients with positive results were successfully contacted and had their AUDIT completed (mean age, 46 years; 32% women; 86% Hispanic/Latino; 65% preferred Spanish).
- Of these, 29 patients had full AUDIT scores ranging from 0 to 3, 252 had scores between 4 and 12, and 131 had scores > 12.
- Follow-up AUDIT assessments conducted at 3-6 months were completed for 251 patients (26% women; 90% Hispanic/Latino), and those with AUDIT scores ≥ 12 were offered additional treatment options, including telecare services, in-person appointments with the addiction medicine clinic, and/or pharmacotherapy.
TAKEAWAY:
- Among the patients with an initial AUDIT score > 12, 19 received pharmacotherapy and 13 had at least one appointment with the addiction medicine service.
- For patients who completed the initial and final follow-ups, the mean change in AUDIT score was −4.1 (95% CI, −3.4 to −4.7).
- Spanish-speaking patients demonstrated a greater reduction in AUDIT scores than English-speaking patients.
- The mean reduction in the AUDIT score at the 3- to 6-month follow-up was larger in those with initial AUDIT scores > 12 than in those with initial AUDIT scores ≤ 12 (7.99 vs 2.25).
IN PRACTICE:
“Our intervention was delivered outside of traditional office visits and did not disrupt clinic flow or add burden to the practice’s providers, who already face significant challenges in serving this high-needs population,” the authors wrote. “We believe this program offers a template for delivering evidence-based, equitable preventive care for unhealthy alcohol use in a diverse patient population.”
SOURCE:
The study was led by Michael Pignone, MD, MPH, Department of Medicine, Duke University School of Medicine, Durham, North Carolina. It was published online in the Journal of General Internal Medicine.
LIMITATIONS:
The lack of systematic tracking for the unsuccessful attempts at establishing contact limited the understanding of the variations in screening positivity and the subsequent engagement in the program. Program staffing and constraints in the budget limited the ability to reach all potentially interested participants. The absence of a control group made it difficult to attribute the observed reductions in the AUDIT scores solely to the intervention. The data on follow-up were collected from only 61% participants, raising the possibility that those who were not reached may have had different outcomes than those who were successfully contacted.
DISCLOSURES:
The Cancer Prevention and Research Institute of Texas provided funding for this program. The authors reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
PPI-Responsive Disease a Subtype of EoE Rather Than GERD
Proton pump inhibitor (PPI)-responsive eosinophilic esophagitis (EoE) appears to be a subtype of EoE rather than a form of gastroesophageal reflux disease, according to comparative proteomic analyses.
Notably, after PPI therapy, the protein profiles of responsive patients reverted and appeared similar to non-EoE patients, whereas the profiles of nonresponsive patients remained largely unchanged.
“Identifying protein biomarkers associated with PPI response may help distinguish EoE phenotypes and guide therapy selections,” said senior author Walter Chan, MD, AGAF, associate professor of medicine in the Division of Gastroenterology, Hepatology, and Endoscopy at Harvard Medical School and director of the Center for Gastrointestinal Motility at Brigham and Women’s Hospital, Boston.
“These findings may provide the framework for developing protein biomarkers to assess response to therapy and monitor disease activity,” he added.
The study was published online in Gastroenterology.
Comparative Proteomic Analyses
Chan and colleagues conducted a prospective exploratory pilot study to identify the differences in esophageal protein profiles among PPI-responsive-EoE (PPI-R-EoE), PPI-nonresponsive-EoE (PPI-NR-EoE), and non-EoE controls using SOMAscan, a proteomics platform that allows simultaneous detection of 1305 human proteins.
The research team prospectively enrolled patients undergoing endoscopy for esophageal symptoms or for EoE follow-up, obtaining clinically indicated biopsies as well as extra samples from the midesophagus.
Patients who were diagnosed with EoE (≥ 15 eosinophils per high-power field [eos/hpf]) were treated with 20 mg of omeprazole twice daily for 8 weeks, followed by repeat biopsies to assess treatment response.
Patients with histologic remission (fewer than 15 eos/hpf) were classified as PPI-R-EoE, whereas those with persistently active disease were classified as PPI-NR-EoE. Patients without EoE served as controls and were categorized as having erosive esophagitis (EE) or no esophagitis.
Overall, the study enrolled 32 patients, including 15 with PPI-R-EoE, eight with PPI-NR-EoE, three with EE, and six with no esophagitis. The demographics, symptoms, and endoscopic findings were similar between the PPI-R-EoE and PPI-NR-EoE patients.
At the index endoscopy, the PPI-R-EoE and PPI-NR-EoE patients had similar esophageal protein profiles, with only 20 proteins differentially expressed at a relaxed cutoff of P < .1. Analysis of these 20 proteins pointed to lower expression, in nonresponsive patients, of six proteins potentially associated with gastrointestinal inflammation: STAT1, STAT3, CFB, interleukin (IL)-17RA, TNFRSF1A, and SERPINA3.
In addition, 136 proteins — including 15 with corrected P < .05 — clearly discriminated PPI-R-EoE patients from non-EoE controls, and 255 proteins — including 249 with P < .05 — discriminated PPI-NR-EoE patients from controls. Both types of EoE patients had proteins associated with enhanced inflammation and vasculogenesis, as well as down-regulation of CRISP3 and DSG1 and upregulation of TNFAIP6.
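The “corrected P” values reflect adjustment for the large number of proteins tested simultaneously on the panel; this summary does not state which correction the authors applied. As a hedged illustration only, the sketch below runs the widely used Benjamini-Hochberg false discovery rate procedure on invented P values (the protein names are borrowed from the text purely for flavor).

```python
import numpy as np

def benjamini_hochberg(p_values):
    """Standard Benjamini-Hochberg FDR adjustment (illustrative; the study's
    actual multiple-testing correction is not specified in this summary)."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)  # p_(i) * m / i for sorted P values
    # enforce monotonicity from the largest rank downward, then cap at 1
    adjusted = np.minimum(np.minimum.accumulate(ranked[::-1])[::-1], 1.0)
    out = np.empty_like(adjusted)
    out[order] = adjusted  # map back to the original input order
    return out

# Invented raw P values for a handful of proteins mentioned in the article
raw_p = {"CRISP3": 0.0004, "DSG1": 0.001, "TNFAIP6": 0.004, "STAT3": 0.03, "CFB": 0.08}
adjusted = benjamini_hochberg(list(raw_p.values()))
for (protein, p), q in zip(raw_p.items(), adjusted):
    print(f"{protein}: raw P = {p:.4f}, BH-adjusted P = {q:.4f}")
```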
The comparative analyses also showed that the follow-up biopsies of PPI-R-EoE patients had protein profiles that resembled non-EoE controls after PPI therapy.
“This further supports the hypothesis that despite the PPI response, PPI-R-EoE represents a subtype of EoE rather than gastroesophageal reflux disease (GERD),” Chan said.
Future EoE Considerations
Although most expressed proteins appeared similar between PPI-responsive and nonresponsive patients before treatment, a few proteins related to gastrointestinal inflammation differed, the study authors wrote, including some previously implicated in IL-4 and IL-13 inflammatory pathways.
“Further study of these proteins may provide insights into the EoE pathogenic pathway, explore their potential to predict PPI response at diagnosis, and identify possible therapeutic targets,” they wrote.
The authors pointed to the small study size as the primary limitation, noting that the pilot study was intended to explore the feasibility of using SOMAscan to assess esophageal protein profiles in different EoE phenotypes. In the future, larger studies with a more expansive set of candidate proteins could help characterize the differences and better identify specific proteins and pathways in EoE, they wrote.
“The takeaway is that PPI responsiveness does not distinguish EoE from GERD but rather PPI is a primary therapy for EoE independent of GERD,” said Marc Rothenberg, MD, director of allergy and immunology and director of the Cincinnati Center for Eosinophilic Disorders at Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio.
Rothenberg, who wasn’t involved with this study, has conducted transcriptome analyses of PPI-R-EoE, which showed PPI-reversible allergic inflammation.
“PPI-R-EoE and PPI-NR-EoE look the same at the molecular level,” he said. “After therapy, PPI-R-EoE normalizes, as per its definition.”
This study was supported by a grant from the Campaign Urging Research for Eosinophilic Disease Foundation, the Kenneth and Louise Goldberg Junior Faculty Award, and a National Institutes of Health award. Chan declared advisory board positions with several pharmaceutical companies; Rothenberg reported no relevant disclosures.
A version of this article appeared on Medscape.com.
A New Weight Loss Drug With No Side Effects? Yes... So Far
For people with obesity or type 2 diabetes, glucagon-like peptide 1 (GLP-1) agonists (including Mounjaro, Wegovy, and Ozempic) have been labeled miracle drugs. But they aren’t miraculous for everyone. Research indicates a significant portion of people discontinue using them within a year.
The main problems with GLP-1 agonists are that they are expensive and have a fairly high rate of side effects — such as nausea, vomiting, diarrhea, or constipation. Another big one is muscle loss.
A new experimental approach may avoid those problems. In animal studies, it produced weight loss without such side effects, and notably without muscle loss (in fact, it engages muscle for some of its effect), which sets it apart and makes it a potential alternative to GLP-1s. The key is not just reducing appetite but also increasing energy expenditure.
How It Works
The new approach targets a protein called NK2R — a member of the neurokinin receptor family, which has a role in a variety of physiological processes, including pain sensation, anxiety, and inflammation.
“We were looking to see genetic linkages to metabolic health, and there NK2R was,” said Zach Gerhart-Hines, PhD, a professor studying molecular metabolism at the University of Copenhagen in Denmark and principal investigator of the study. The group then created a few long-acting agonists that are selective for NK2R. So far, they’ve tested them in mice and nonhuman primates.
“The data on new medicines targeting NK2R is very promising and highlights the potential of both reducing food intake and increasing energy expenditure,” said Daniel Drucker, MD, an endocrinologist and researcher at Lunenfeld-Tanenbaum Research Institute in Toronto who was not involved in the study.
“The drug activates a certain region in the hindbrain of the animal, which is controlling food intake, and it does so by reducing appetite without increasing nausea or vomiting,” explained Frederike Sass, a research assistant at the University of Copenhagen in Copenhagen, Denmark, who led the study.
Gerhart-Hines said that even at the highest dose, there were no incidents of vomiting among the nonhuman primates. Mice can’t vomit, but there are ways to tell whether a drug makes them feel unwell. One approach is to give the mice sweetened water at the same time they receive the drug and then, once they are off it, offer a choice between sweetened and unsweetened water. Mice normally prefer sweet water; if the drug made them feel ill, they will choose plain water instead because they associate the sweetness with feeling bad. Sass said that mice treated with the NK2R agonist continued to drink sweet water after treatment, whereas mice given semaglutide preferred plain water posttreatment.
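This kind of two-bottle test is commonly summarized as a preference index, the fraction of total fluid intake taken from the sweetened bottle. The sketch below shows one such calculation with invented intake volumes; the study’s actual readout and numbers may differ.

```python
def sweet_preference(sweet_ml: float, plain_ml: float) -> float:
    """Fraction of total intake taken from the sweetened bottle.

    Values near 1.0 suggest no learned aversion; values well below 0.5 suggest
    the mouse came to associate the sweet water with feeling unwell.
    (Illustrative metric only; the study's exact readout is not given here.)
    """
    total = sweet_ml + plain_ml
    return sweet_ml / total if total > 0 else float("nan")

# Invented posttreatment intakes (mL over 24 h)
print(f"NK2R-agonist group: {sweet_preference(4.2, 0.9):.2f}")  # ~0.8, no aversion
print(f"Semaglutide group:  {sweet_preference(1.1, 3.6):.2f}")  # ~0.2, aversion
```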
The researchers also monitored the animals’ psychological health, as NK2R has been associated with anxiety, but they observed no behavioral changes.
The Key Mechanism at Work
One big question is how the NK2R agonists work. The amphetamines people used for weight loss during the 1950s and 1960s worked by making people more active. GLP-1 agonists reduce appetite and lower blood sugar. This is not that. In their studies with animals, the researchers didn’t observe that the animals were more active nor were there changes in other biomarkers like insulin. So far, the main difference they found with the NK2R agonists is an increase in thermogenesis in certain muscles.
Another benefit of the NK2R treatments is that they don’t seem to have a big impact on lean mass — the nonfat component of body weight, namely muscle, bones, and organs. Studies indicate that 25%-39% of weight loss on GLP-1 agonists is lost muscle. According to DEXA scans of the mice, Gerhart-Hines said they observed no lean mass loss. (In mice, he noted, GLP-1 agonists can cause up to 50% lean mass loss).
And for people with both diabetes and obesity, “what we found with NK2R is that obese and diabetic models, whether mice or monkeys, respond much better to that treatment in terms of glucose control and body weight loss,” Gerhart-Hines said. He explained that GLP-1 agonists don’t work quite as well for weight loss in people with diabetes because the drug stimulates insulin production in a system that already has insulin issues and can cause more sugar to be stored as fat.
Further, GLP-1 agonists are peptide drugs, which are expensive to make. The NK2R agonists are small molecules that would be cheaper to produce, Gerhart-Hines believes. One candidate they’re testing would likely be given once daily, another once weekly.
The current surge in obesity and diabetes may be a direct consequence of our bodies’ decreased energy expenditure. “Compared to 80s and 90s, the average person is more physically active, but the overarching basal resting energy expenditure has gone down,” said Gerhart-Hines, citing research by John Speakman at the University of Aberdeen, Scotland. Why that is remains unclear, he said, though he speculates it could be related to our diets or climate-controlled environments.
But the NK2R agonists are among the many currently being studied for weight loss, and it may be hard to compete with the GLP-1 agonists. “As GLP-1 medicines will soon achieve 25% weight loss and have an extensively studied safety profile, the task of producing better drugs that work well in most people, are well tolerated and also reduce the complications of cardiometabolic disease, is challenging but not impossible,” said Drucker.
Gerhart-Hines said they plan to start human trials in the next year, but he suspects it will be another 6 or 7 years before a drug comes to market, if the trials are successful.
“There’s people who want [a GLP-1 agonist] and can’t even get it,” Gerhart-Hines said. As far as weight loss drugs, he noted, “we are not even saturating the market right now.”
A version of this article first appeared on Medscape.com.
FROM NATURE
Most Effective Treatments for Adult ADHD Identified
Stimulants and atomoxetine are the only treatments for attention-deficit/hyperactivity disorder (ADHD) in adults that outperform placebo on both self-reported and clinician-reported measures of core symptoms, results of a large comprehensive meta-analysis showed.
The study of 113 randomized controlled trials with nearly 15,000 adults with a formal diagnosis of ADHD also revealed that atomoxetine is less acceptable to patients and that results of efficacy of nonpharmacological strategies are inconsistent.
Data on long-term efficacy of ADHD therapies are lacking, investigators noted, so these results only apply to short-term efficacy.
“There is a lot of controversy about medication, so these are quite reassuring data and certainly reinforce the role of medication as a treatment for ADHD,” study investigator Samuele Cortese, MD, PhD, with the University of Southampton, England, said during a press briefing hosted by the UK Science Media Center, where the findings were released.
The results also point to the “possible role of nonpharmacological interventions, which are currently not well established in current guidelines. However, there is a need for better evidence to fully understand the exact effect of these nonpharmacological interventions,” Cortese noted.
The study was published online in The Lancet Psychiatry.
Bridging the Knowledge Gap
Once thought to be a childhood disorder only, ADHD is now well-known to persist into adulthood, affecting roughly 2.5% of the general adult population worldwide. The comparative benefits and harms of available interventions for ADHD in adults remain unclear.
To address this knowledge gap, researchers did a comprehensive systematic review and component network meta-analysis comparing a broad range of drug and nondrug treatments for adults with ADHD across several outcomes.
For reducing core ADHD symptoms at 12 weeks, only stimulants and atomoxetine were better than placebo on both self-reported and clinician-reported rating scales, the study team found.
For stimulants, the standardized mean differences (SMDs) on the self-reported and clinician-reported scales were 0.39 and 0.61, respectively. The corresponding SMDs for atomoxetine were 0.38 and 0.51.
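For context, a standardized mean difference expresses a between-group difference in units of the pooled standard deviation. This summary does not say which estimator the meta-analysts used (Hedges’ g with a small-sample correction is typical for such analyses), but the basic form is:

```latex
\mathrm{SMD} = \frac{\bar{x}_{\text{active}} - \bar{x}_{\text{placebo}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^{2} + (n_2 - 1)\,s_2^{2}}{n_1 + n_2 - 2}}
```

The positive values quoted here for the medications and the negative values quoted below for the nondrug interventions presumably reflect the direction in which each outcome scale was scored rather than opposite effects.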
There was no evidence that ADHD medications were better than placebo in improving additional relevant outcomes such as quality of life.
In terms of nondrug interventions, cognitive behavioral therapy, cognitive remediation, mindfulness, psychoeducation, and transcranial direct current stimulation were better than placebo only on clinician-reported measures, with SMDs ranging from −0.77 to −1.35.
However, the evidence for nondrug strategies is less conclusive overall, with “discordant results across types of raters and based on a small body of evidence,” the authors wrote in their article.
And evidence for long-term efficacy (beyond 12 weeks) for ADHD interventions is “limited and under-investigated,” they said.
Regarding acceptability, all strategies were similar to placebo except for atomoxetine and guanfacine, which had lower acceptability than placebo.
“It’s very important to emphasize that we focused on the average effect, not at an individual level,” first author Edoardo Ostinelli, MD, with University of Oxford, England, said at the briefing. “Therefore, we cannot make any recommendation at an individual level. We need studies with individual participant data so that we can personalize treatment.”
Cortese said the information from this analysis may be particularly important for “psychoeducation” of the patient before starting a treatment plan. Patients often ask about nonpharmacological interventions, and this study provides the “best synthesis of available data to inform these discussions,” he said.
Experts Weigh In
Several experts weighed in on the results in a statement from the UK Science Media Center.
Celso Arango, MD, PhD, psychiatrist with Gregorio Marañón General University Hospital, Madrid, Spain, noted that there is a “clear shortage of research on ADHD in adulthood, particularly regarding medium-term (beyond 12 weeks) and long-term treatment outcomes. Consequently, the findings are applicable only to short-term treatment.”
Another strength of the study is that it was developed with input from people with ADHD, Arango added, making it “highly relevant.”
The majority of studies available for the analysis involved pharmacological treatments, which is important to consider when interpreting the findings, noted Katya Rubia, PhD, professor of cognitive neuroscience, King’s College London, England.
“For example, for neurostimulation, only 10 studies were included and on very heterogeneous stimulation methods,” Rubia said. “The evidence on the efficacy of neurostimulation is therefore hardly conclusive and more studies are needed to establish their efficacy.”
Roi Cohen Kadosh, PhD, professor of cognitive neuroscience, University of Surrey, Guildford, England, agreed. While the study is a “valuable contribution to the literature,” it sheds light on “both the scarcity of neurostimulation research and the limited exploration of combined treatment approaches for ADHD,” he said.
“While novel neurostimulation methods linked to neuroplasticity — such as those we have demonstrated to be superior in children with ADHD — were not covered here, they have shown promising and lasting benefits. In contrast, research in adults remains relatively underdeveloped. Moving forward, greater emphasis on innovative, tolerable, personalized, and sustainable neurostimulation approaches is essential to meet the unmet clinical needs of adults with ADHD,” Kadosh added.
In a commentary in The Lancet Psychiatry, David Coghill, MD, with The University of Melbourne, Australia, cautioned that the findings do not mean that potential benefits of nonpharmacological interventions should be dismissed.
“While some of the nonpharmacological treatments (eg, cognitive behavioral therapy, cognitive remediation, mindfulness, psychoeducation, and transcranial direct current stimulation) showed effects on clinician-rated outcomes similar to, and in some cases greater than, the pharmacological treatments, they did not show the same effects on self-reported outcomes. These interventions were therefore considered less robust than the pharmacological treatments that showed changes on both measurement types,” he wrote.
This study had no commercial funding. Ostinelli had received research and consultancy fees from Angelini Pharma. Cortese received reimbursement for travel and accommodation expenses in relation to lectures delivered for the Association for Child and Adolescent Mental Health, the Canadian ADHD Resource Alliance, and the British Association of Psychopharmacology; received honoraria from MEDICE; and is chair of the European ADHD Guidelines Group. Arango, Rubia, and Kadosh had no relevant disclosures. Coghill had received honoraria from CCM Conecta, Takeda, Novartis, Servier, and MEDICE.
A version of this article first appeared on Medscape.com.
, results of a large comprehensive meta-analysis showed.
The study of 113 randomized controlled trials with nearly 15,000 adults with a formal diagnosis of ADHD also revealed that atomoxetine is less acceptable to patients and that results of efficacy of nonpharmacological strategies are inconsistent.
Data on long-term efficacy of ADHD therapies are lacking, investigators noted, so these results only apply to short-term efficacy.
“There is a lot of controversy about medication, so these are quite reassuring data and certainly reinforce the role of medication as a treatment for ADHD,” study investigator Samuele Cortese, MD, PhD, with University of Southampton, England, said during a press briefing hosted by the UK Science Media Center where the findings were released.
The results also point to the “possible role of nonpharmacological interventions, which are currently not well established in current guidelines. However, there is a need for better evidence to fully understand the exact effect of these nonpharmacological interventions,” Cortese noted.
The study was published online in The Lancet Psychiatry.
Bridging the Knowledge Gap
Once thought to be a childhood disorder only, ADHD is now well-known to persist into adulthood, affecting roughly 2.5% of the general adult population worldwide. The comparative benefits and harms of available interventions for ADHD in adults remain unclear.
To address this knowledge gap, researchers did a comprehensive systematic review and component network meta-analysis comparing a broad range of drug and nondrug treatments for adults with ADHD across several outcomes.
For reducing core ADHD symptoms at 12 weeks, only stimulants and atomoxetine were better than placebo in self-reported and clinician-reported rating scales, the study team found.
For stimulants, the standardized mean differences (SMDs) on the self-reported and clinician-reported scales were 0.39 and 0.61, respectively. The corresponding SMDs for atomoxetine were 0.38 and 0.51.
There was no evidence that ADHD medications were better than placebo in improving additional relevant outcomes such as quality of life.
In terms of nondrug interventions, cognitive behavioral therapy, cognitive remediation, mindfulness, psychoeducation, and transcranial direct current stimulation were better than placebo only on clinician-reported measures, with SMDs of −1.35, −0.79, −0.77, and −0.78, respectively.
However, the evidence for nondrug strategies is less conclusive overall, with “discordant results across types of raters and based on a small body of evidence,” the authors wrote in their article.
And evidence for long-term efficacy (beyond 12 weeks) for ADHD interventions is “limited and under-investigated,” they said.
Regarding acceptability, all strategies were similar to placebo except for atomoxetine and guanfacine which had lower acceptability than placebo.
“It’s very important to emphasize that we focused on the average effect, not at an individual level,” first author Edoardo Ostinelli, MD, with University of Oxford, England, said at the briefing. “Therefore, we cannot make any recommendation at an individual level. We need studies with individual participant data so that we can personalize treatment.”
Cortese said the information from this analysis may be particularly important for “psychoeducation” of the patient before actually starting with a treatment plan. Patients often ask about nonpharmacological interventions and this study provides the “best synthesis of available data to inform these discussions,” he said.
Experts Weigh In
Several experts weighed in on the results in a statement from the UK Science Media Center.
Celso Arango, MD, PhD, psychiatrist with Gregorio Marañón General University Hospital, Madrid, Spain, noted that there is a “clear shortage of research on ADHD in adulthood, particularly regarding medium-term (beyond 12 weeks) and long-term treatment outcomes. Consequently, the findings are applicable only to short-term treatment.”
Another strength of the study is that it was developed with input from people with ADHD, Arango added, making it “highly relevant.”
The majority of studies available for the analysis involved pharmacological treatments, which is important to consider when interpreting the findings, noted Katya Rubia, PhD, professor of cognitive neuroscience, King’s College London, England.
“For example, for neurostimulation, only 10 studies were included and on very heterogeneous stimulation methods,” Rubia said. “The evidence on the efficacy of neurostimulation is therefore hardly conclusive and more studies are needed to establish their efficacy.”
Roi Cohen Kadosh, PhD, professor of cognitive neuroscience, University of Surrey, Guildford, England, agreed. While the study is a “valuable contribution to the literature,” it sheds light on “both the scarcity of neurostimulation research and the limited exploration of combined treatment approaches for ADHD,” he said.
“While novel neurostimulation methods linked to neuroplasticity — such as those we have demonstrated to be superior in children with ADHD — were not covered here, they have shown promising and lasting benefits. In contrast, research in adults remains relatively underdeveloped. Moving forward, greater emphasis on innovative, tolerable, personalized, and sustainable neurostimulation approaches is essential to meet the unmet clinical needs of adults with ADHD,” Kadosh added.
In a commentary in The Lancet Psychiatry, David Coghill, MD, with The University of Melbourne, Australia, cautioned that the findings do not mean that potential benefits of nonpharmacological interventions should be dismissed.
“While some of the nonpharmacological treatments (eg, cognitive behavioral therapy, cognitive remediation, mindfulness, psychoeducation, and transcranial direct current stimulation) showed effects on clinician-rated outcomes similar to, and in some cases greater than, the pharmacological treatments, they did not show the same effects on self-reported outcomes. These interventions were therefore considered less robust than the pharmacological treatments that showed changes on both measurement types,” he wrote.
This study had no commercial funding. Ostinelli had received research and consultancy fees from Angelini Pharma. Cortese received reimbursement for travel and accommodation expenses in relation to lectures delivered for the Association for Child and Adolescent Central Health, the Canadian ADHD Alliance Resource, and the British Association of Psychopharmacology; and had received honoraria from MEDICE; and is chair of the European ADHD Guidelines Group. Arango, Rubia, and Kadosh had no relevant disclosures. Coghill had received honoraria from CCM Conecta, Takeda, Novartis, Servier, and MEDICE.
A version of this article first appeared on Medscape.com.
, results of a large comprehensive meta-analysis showed.
The study of 113 randomized controlled trials with nearly 15,000 adults with a formal diagnosis of ADHD also revealed that atomoxetine is less acceptable to patients and that results of efficacy of nonpharmacological strategies are inconsistent.
Data on long-term efficacy of ADHD therapies are lacking, investigators noted, so these results only apply to short-term efficacy.
“There is a lot of controversy about medication, so these are quite reassuring data and certainly reinforce the role of medication as a treatment for ADHD,” study investigator Samuele Cortese, MD, PhD, with University of Southampton, England, said during a press briefing hosted by the UK Science Media Center where the findings were released.
The results also point to the “possible role of nonpharmacological interventions, which are currently not well established in current guidelines. However, there is a need for better evidence to fully understand the exact effect of these nonpharmacological interventions,” Cortese noted.
The study was published online in The Lancet Psychiatry.
Bridging the Knowledge Gap
Once thought to be a childhood disorder only, ADHD is now well-known to persist into adulthood, affecting roughly 2.5% of the general adult population worldwide. The comparative benefits and harms of available interventions for ADHD in adults remain unclear.
To address this knowledge gap, researchers did a comprehensive systematic review and component network meta-analysis comparing a broad range of drug and nondrug treatments for adults with ADHD across several outcomes.
For reducing core ADHD symptoms at 12 weeks, only stimulants and atomoxetine were better than placebo in self-reported and clinician-reported rating scales, the study team found.
For stimulants, the standardized mean differences (SMDs) on the self-reported and clinician-reported scales were 0.39 and 0.61, respectively. The corresponding SMDs for atomoxetine were 0.38 and 0.51.
There was no evidence that ADHD medications were better than placebo in improving additional relevant outcomes such as quality of life.
In terms of nondrug interventions, cognitive behavioral therapy, cognitive remediation, mindfulness, psychoeducation, and transcranial direct current stimulation were better than placebo only on clinician-reported measures, with SMDs of −1.35, −0.79, −0.77, and −0.78, respectively.
However, the evidence for nondrug strategies is less conclusive overall, with “discordant results across types of raters and based on a small body of evidence,” the authors wrote in their article.
And evidence for long-term efficacy (beyond 12 weeks) for ADHD interventions is “limited and under-investigated,” they said.
Regarding acceptability, all strategies were similar to placebo except for atomoxetine and guanfacine which had lower acceptability than placebo.
“It’s very important to emphasize that we focused on the average effect, not at an individual level,” first author Edoardo Ostinelli, MD, with University of Oxford, England, said at the briefing. “Therefore, we cannot make any recommendation at an individual level. We need studies with individual participant data so that we can personalize treatment.”
Cortese said the information from this analysis may be particularly important for “psychoeducation” of the patient before actually starting with a treatment plan. Patients often ask about nonpharmacological interventions and this study provides the “best synthesis of available data to inform these discussions,” he said.
Experts Weigh In
Several experts weighed in on the results in a statement from the UK Science Media Center.
Celso Arango, MD, PhD, psychiatrist with Gregorio Marañón General University Hospital, Madrid, Spain, noted that there is a “clear shortage of research on ADHD in adulthood, particularly regarding medium-term (beyond 12 weeks) and long-term treatment outcomes. Consequently, the findings are applicable only to short-term treatment.”
Another strength of the study is that it was developed with input from people with ADHD, Arango added, making it “highly relevant.”
The majority of studies available for the analysis involved pharmacological treatments, which is important to consider when interpreting the findings, noted Katya Rubia, PhD, professor of cognitive neuroscience, King’s College London, England.
“For example, for neurostimulation, only 10 studies were included and on very heterogeneous stimulation methods,” Rubia said. “The evidence on the efficacy of neurostimulation is therefore hardly conclusive and more studies are needed to establish their efficacy.”
Roi Cohen Kadosh, PhD, professor of cognitive neuroscience, University of Surrey, Guildford, England, agreed. While the study is a “valuable contribution to the literature,” it sheds light on “both the scarcity of neurostimulation research and the limited exploration of combined treatment approaches for ADHD,” he said.
“While novel neurostimulation methods linked to neuroplasticity — such as those we have demonstrated to be superior in children with ADHD — were not covered here, they have shown promising and lasting benefits. In contrast, research in adults remains relatively underdeveloped. Moving forward, greater emphasis on innovative, tolerable, personalized, and sustainable neurostimulation approaches is essential to meet the unmet clinical needs of adults with ADHD,” Kadosh added.
In a commentary in The Lancet Psychiatry, David Coghill, MD, with The University of Melbourne, Australia, cautioned that the findings do not mean that potential benefits of nonpharmacological interventions should be dismissed.
“While some of the nonpharmacological treatments (eg, cognitive behavioral therapy, cognitive remediation, mindfulness, psychoeducation, and transcranial direct current stimulation) showed effects on clinician-rated outcomes similar to, and in some cases greater than, the pharmacological treatments, they did not show the same effects on self-reported outcomes. These interventions were therefore considered less robust than the pharmacological treatments that showed changes on both measurement types,” he wrote.
This study had no commercial funding. Ostinelli had received research and consultancy fees from Angelini Pharma. Cortese received reimbursement for travel and accommodation expenses in relation to lectures delivered for the Association for Child and Adolescent Mental Health, the Canadian ADHD Resource Alliance, and the British Association of Psychopharmacology; received honoraria from MEDICE; and is chair of the European ADHD Guidelines Group. Arango, Rubia, and Kadosh had no relevant disclosures. Coghill had received honoraria from CCM Conecta, Takeda, Novartis, Servier, and MEDICE.
A version of this article first appeared on Medscape.com.
FROM THE LANCET PSYCHIATRY
New Test’s Utility in Distinguishing OA From Inflammatory Arthritis Questioned
A new diagnostic test can accurately distinguish osteoarthritis (OA) from inflammatory arthritis using two synovial fluid biomarkers, according to research published in the Journal of Orthopaedic Research on December 18, 2024.
However, experts question whether such a test would be useful.
“The need would seem to be fairly limited, mostly those with single joint involvement and a lack of other systemic features to specify a diagnosis, which is not that common, at least in rheumatology, where there are usually features in the history and physical that can clarify the diagnosis,” said Amanda E. Nelson, MD, MSCR, professor of medicine in the Division of Rheumatology, Allergy, and Immunology at the University of North Carolina at Chapel Hill. She was not involved with the research.
The test uses an algorithm that incorporates concentrations of cartilage oligomeric matrix protein (COMP) and interleukin 8 (IL-8) in synovial fluid. The researchers hypothesized that a ratio of the two biomarkers could distinguish between primary OA and other inflammatory arthritic diagnoses.
“Primary OA is unlikely when either COMP concentration or COMP/IL‐8 ratio in the synovial fluid is low since these conditions indicate either lack of cartilage degradation or presence of high inflammation,” wrote Daniel Keter and coauthors at CD Diagnostics, Claymont, Delaware, and CD Laboratories, Towson, Maryland. “In contrast, a high COMP concentration result in combination with high COMP/IL‐8 ratio would be suggestive of low inflammation in the setting of cartilage deterioration, which is indicative of primary OA.”
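As a minimal sketch of the two-branch rule the authors describe, the Python snippet below encodes the logic with placeholder cutoffs; the article does not report the thresholds the assay actually uses, so COMP_CUTOFF and RATIO_CUTOFF here are hypothetical.

```python
# Sketch of the COMP + COMP/IL-8 decision logic described in the quote above.
# The cutoff values are hypothetical placeholders, not the assay's real thresholds.

COMP_CUTOFF = 1.0    # hypothetical threshold for "high" COMP (arbitrary units)
RATIO_CUTOFF = 1.0   # hypothetical threshold for "high" COMP/IL-8 ratio

def classify_synovial_fluid(comp: float, il8: float) -> str:
    """Flag primary OA only when both COMP and the COMP/IL-8 ratio are high."""
    ratio = comp / il8 if il8 > 0 else float("inf")
    if comp >= COMP_CUTOFF and ratio >= RATIO_CUTOFF:
        # High cartilage-degradation marker with relatively low inflammation
        return "consistent with primary OA"
    # Low COMP (little cartilage degradation) or low ratio (high inflammation)
    return "primary OA unlikely"

print(classify_synovial_fluid(comp=2.5, il8=0.8))  # consistent with primary OA
print(classify_synovial_fluid(comp=0.4, il8=3.0))  # primary OA unlikely
```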
In patients with OA, synovial fluid can be difficult to aspirate in sufficient amounts for testing, Nelson said.
“If synovial fluid is present and able to be aspirated, it is unclear if this test has any benefit over a simple, standard cell count and crystal assessment, which can also distinguish between osteoarthritis and more inflammatory arthritides,” she said.
Differentiating OA
To test this potential diagnostic algorithm, researchers obtained 171 knee synovial fluid samples from approved clinical remnant sample sources and a biovendor. All samples were annotated with an existing arthritic diagnosis, including 54 with primary OA, 57 with rheumatoid arthritis (RA), 30 with crystal arthritis (CA), and 30 with native septic arthritis (NSA).
Researchers assigned a CA diagnosis based on the presence of monosodium urate or calcium pyrophosphate dihydrate crystals in the synovial fluid, and NSA was determined via the Synovasure Alpha Defensin test. OA was confirmed via radiograph as Kellgren‐Lawrence grades 2‐4 with no other arthritic diagnoses. RA samples were purchased via a biovendor, and researchers were not provided with diagnosis‐confirming data.
All samples were randomized and blinded before testing, and researchers used enzyme-linked immunosorbent assays to measure both the COMP and IL-8 biomarkers.
Of the 54 OA samples, 47 tested positive for OA using the COMP + COMP/IL-8 ratio algorithm. Of the 117 samples with inflammatory arthritis, 13 tested positive for OA. Overall, the diagnostic algorithm demonstrated a clinical sensitivity of 87.0% and specificity of 88.9%. The positive predictive value was 78.3%, while the negative predictive value was 93.7%.
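The reported metrics can be traced directly from the counts in the preceding paragraph (47 of 54 OA samples flagged as OA; 13 of 117 inflammatory samples flagged as OA), as this short Python check shows.

```python
# Reproduce the reported performance metrics from the counts above.
tp = 47            # OA samples correctly flagged as OA
fn = 54 - tp       # OA samples missed (7)
fp = 13            # inflammatory samples incorrectly flagged as OA
tn = 117 - fp      # inflammatory samples correctly not flagged (104)

sensitivity = tp / (tp + fn)   # 47/54
specificity = tn / (tn + fp)   # 104/117
ppv = tp / (tp + fp)           # 47/60
npv = tn / (tn + fn)           # 104/111

print(f"Sensitivity: {sensitivity:.1%}")   # 87.0%
print(f"Specificity: {specificity:.1%}")   # 88.9%
print(f"PPV: {ppv:.1%}")                   # 78.3%
print(f"NPV: {npv:.1%}")                   # 93.7%
```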
Unclear Clinical Need
Nelson noted that while this test aims to differentiate between arthritic diagnoses, patients can also have multiple conditions.
“Many individuals with rheumatoid arthritis will develop osteoarthritis, but they can have both, so a yes/no test is of unclear utility,” she said. OA and calcium pyrophosphate deposition (CPPD) disease can often occur together, “but the driver is really the OA, and the CPPD is present but not actively inflammatory,” she continued. “Septic arthritis should be readily distinguishable by cell count alone [and again, can coexist with any of the other conditions], and a thorough history and physical should be able to differentiate in most cases.”
While the results of this study are “reasonably impressive,” more clinical information is needed to interpret them, added C. Kent Kwoh, MD, director of the University of Arizona Arthritis Center and professor of medicine and medical imaging at the University of Arizona College of Medicine, Tucson, Arizona.
Because the study was retrospective and the specimens came from different sources, it was not clear whether the patients were being treated when the samples were taken or whether their conditions were controlled or flaring.
“I would say this is a reasonable first step,” Kwoh said. “We would need prospective studies, more clinical characterization, and potentially longitudinal studies to understand when this test may be useful.”
This research was internally funded by Zimmer Biomet. All authors were employees of CD Diagnostics or CD Laboratories, both of which are subsidiaries of Zimmer Biomet. Kwoh reported receiving grants or contracts with AbbVie, Artiva, Eli Lilly and Company, Bristol Myers Squibb, Cumberland, Pfizer, GSK, and Galapagos, and consulting fees from TrialSpark/Formation Bio, Express Scripts, GSK, TLC BioSciences, and AposHealth. He participates on Data Safety Monitoring or Advisory Boards of Moebius Medical, Sun Pharma, Novartis, Xalud, and Kolon TissueGene. Nelson reported no relevant disclosures.
A version of this article appeared on Medscape.com.
FROM JOURNAL OF ORTHOPAEDIC RESEARCH
Study Finds Association Between Statins and Glaucoma
Adults with high cholesterol taking statins may have a significantly higher risk of developing glaucoma than those not taking the cholesterol-lowering drugs, an observational study of a large research database found.
The study, published in Ophthalmology Glaucoma, analyzed electronic health records of 79,742 adults with hyperlipidemia in the All of Us (AoU) Research Program database from 2017 to 2022. The repository is maintained by the National Institutes of Health and provides data for research into precision medicine.
The 6365 statin users in the study population had a 47% greater unadjusted prevalence of glaucoma than nonusers of the drugs (P < .001) and a 13% greater prevalence in models that adjusted for potential confounding variables (P = .02). The researchers also found statin users had significantly higher levels of low-density lipoprotein cholesterol (LDL-C), but even patients with optimal levels of LDL-C had higher rates of glaucoma.
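The article does not describe how the adjusted prevalence estimates were obtained. As a hedged illustration of one common approach, not the authors’ method, the sketch below fits a modified Poisson regression (log link with robust standard errors) to simulated data; every variable name and value here is hypothetical.

```python
# Illustration only: a modified Poisson regression is one common way to estimate
# an adjusted prevalence ratio from cross-sectional data. The data are simulated
# and the covariates hypothetical; this is not the study's actual model or data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "statin": rng.integers(0, 2, n),          # 1 = statin user, 0 = nonuser
    "age": rng.normal(65, 10, n),
    "ldl": rng.normal(140, 25, n),
})
# Simulated binary glaucoma outcome with a modest positive statin association
logit = -3.0 + 0.12 * df["statin"] + 0.03 * (df["age"] - 65)
df["glaucoma"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Poisson GLM (log link) with robust "sandwich" standard errors
model = smf.glm(
    "glaucoma ~ statin + age + ldl",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")

print("Adjusted prevalence ratio for statin use:", np.exp(model.params["statin"]))
```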
‘A Little Unusual’
Drawing any clinically relevant conclusions from this latest study would be premature, said Victoria Tseng, MD, PhD, an assistant professor at UCLA Stein Eye Institute and Doheny Eye Centers UCLA, and the senior author of the study. “I certainly would not be telling my patients on statins to stop their statins.”
Tseng acknowledged her group’s finding runs counter to previous studies that found statins may help prevent glaucoma or at least have no effect on the eye disease, although the association between cholesterol and glaucoma has been well established.
A 2019 analysis of nearly 137,000 participants in three population studies found no connection between statin use and the risk for primary open-angle glaucoma. A 2012 study of more than 500,000 people with high cholesterol found statin use was associated with a significant reduction in the risk for open-angle glaucoma.
“It’s a little unusual that we found the opposite,” Tseng said in an interview.
One explanation is the observational nature of the AoU analysis Tseng’s group conducted. “We don’t know what these people look like or how well the data were collected, so we’re going off of what’s there in the database,” she said.
Another explanation could be the nature of hyperlipidemia itself, she said. “There have definitely been studies that suggest increased cholesterol levels are associated with an increased risk of glaucoma. Presumably, you’re not going to be taking a statin unless your cholesterol is a little worse.”
While the study analysis attempted to control for cholesterol levels, Tseng noted, “there could be some residual confounding from that.”
Statin users in the study had an average LDL-C level of 144.9 mg/dL vs 136.3 mg/dL in the population not taking any cholesterol medication (P < .001). Statin users with optimal LDL-C, defined as less than 100 mg/dL, had a 39% greater adjusted prevalence of glaucoma (P = .02), while those with high LDL-C (160-189 mg/dL) had a 37% greater adjusted prevalence (P = .005).
Age was another factor in the risk for glaucoma, the study found. Statin users aged 60-69 years had an adjusted rate of glaucoma 28% greater than that for nonusers (P = .05).
Laboratory studies may help clarify the relationships between statins and glaucoma, Tseng said. That could include putting statins directly on the optic nerve of laboratory mice and further investigating how statins affect the mechanisms that influence eye pressure, a key driver of glaucoma. From a population study perspective, a randomized trial of glaucoma patients comparing the effect of statins and other cholesterol-lowering medications with nonuse may provide answers.
Database Strengths and Limitations
The study “adds to the somewhat mixed literature on the potential association between statins and glaucoma,” Sophia Wang, MD, MS, a glaucoma specialist at Stanford Byers Eye Institute in Palo Alto, California, said in an interview.
The AoU research cohort is a “notable strength” of the new paper, added Wang, who has used the AoU database to study the relationship between blood pressure, blood pressure medications, and glaucoma.
“The population is especially large and diverse, with a large proportion of participants from backgrounds that are traditionally underrepresented in research,” she said. The inclusion of both medical records and survey data means the health information on the cohort is detailed and longitudinal.
“The authors make excellent use here of the data by including in their analyses results of laboratory investigations — LDL-C, notably — which wouldn’t be readily available in other types of datasets such as claims datasets,” she said.
However, the database has limitations as well, including its reliance on diagnostic coding, which is prone to errors, to identify glaucoma, and missing information on eye examinations. In addition, the study used a single LDL-C measurement rather than multiple measurements, Wang pointed out, “and we know that LDL-C can vary over time.”
The study was funded by Research to Prevent Blindness. Tseng and Wang reported no relevant financial relationships to disclose.
A version of this article first appeared on Medscape.com.
FROM OPHTHALMOLOGY GLAUCOMA