Systematic Viral Testing in Emergency Departments Has Limited Benefit for General Population


Routine use of rapid respiratory virus testing in the emergency department (ED) appears to show limited benefit among patients with signs and symptoms of acute respiratory infection (ARI), according to a new study.

Rapid viral testing wasn’t associated with reduced antibiotic use, ED length of stay, or rates of ED return visits or hospitalization. However, testing was associated with a small increase in antiviral prescriptions and a small reduction in blood tests and chest x-rays.

“Our interest in studying the benefits of rapid viral testing in emergency departments comes from a commitment to diagnostic stewardship — ensuring that the right tests are administered to the right patients at the right time while also curbing overuse,” said lead author Tilmann Schober, MD, a resident in pediatric infectious disease at McGill University and Montreal Children’s Hospital.

“Following the SARS-CoV-2 pandemic, we have seen a surge in the availability of rapid viral testing, including molecular multiplex panels,” he said. “However, the actual impact of these advancements on patient care in the ED remains uncertain.”

The study was published online on March 4, 2024, in JAMA Internal Medicine.

Rapid Viral Testing

Dr. Schober and colleagues conducted a systematic review and meta-analysis of 11 randomized clinical trials to assess whether rapid testing for respiratory viruses was associated with changes in patient management in the ED.

In particular, the research team looked at whether testing in patients with suspected ARI was associated with decreased antibiotic use, ancillary testing, ED length of stay, ED return visits, and hospitalization, as well as increased influenza antiviral treatment.

Among the 11 trials, seven studies included molecular testing, and eight used multiplex panels targeting influenza and respiratory syncytial virus (RSV); influenza, RSV, adenovirus, and parainfluenza; or 15 or more respiratory viruses. No study evaluated testing for SARS-CoV-2. The research team reported risk ratios (RRs) and risk difference estimates.

In general, routine rapid viral testing was associated with higher use of influenza antivirals (RR, 1.33) and lower use of chest radiography (RR, 0.88) and blood tests (RR, 0.81). However, the magnitude of these effects was small. For instance, 70 patients would need to be tested to yield one additional antiviral prescription, and 30 would need to be tested to avoid one chest x-ray.
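These number-needed-to-test figures are the reciprocals of the absolute risk differences between tested and untested patients. A minimal sketch of that arithmetic (the function name is illustrative, and the risk differences are back-calculated from the NNTs reported here, not taken from the trial data):

```python
# The absolute risk difference implied by a number needed to test (NNT)
# is simply its reciprocal; these values are back-calculated from the
# article's reported NNTs, not drawn from the underlying trials.

def risk_difference_from_nnt(nnt: float) -> float:
    """Return the absolute risk difference implied by an NNT."""
    return 1.0 / nnt

print(f"{risk_difference_from_nnt(70):.1%}")  # ~1.4% more antiviral prescriptions
print(f"{risk_difference_from_nnt(30):.1%}")  # ~3.3% fewer chest x-rays
```

In other words, testing shifted antiviral prescribing by only about 1.4 percentage points and chest x-ray use by about 3.3 points, which is why the authors describe the effects as small.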

“This suggests that, while statistically significant, the practical impact of these secondary outcomes may not justify the extensive effort and resources involved in widespread testing,” Dr. Schober said.

In addition, there was no association between rapid testing and antibiotic use (RR, 0.99), urine testing (RR, 0.95), ED length of stay (difference, 0 h), return visits (RR, 0.93), or hospitalization (RR, 1.01).

Notably, there was no association between rapid viral testing and antibiotic use in any prespecified subgroup based on age, test method, publication date, number of viral targets, risk of bias, or industry funding, the authors said. They concluded that rapid virus testing should be reserved for patients for whom the testing will change treatment, such as high-risk patients or those with severe disease.

“It’s crucial to note that our study specifically evaluated the impact of systematic testing of patients with signs and symptoms of acute respiratory infection. Our findings do not advocate against rapid respiratory virus testing in general,” Dr. Schober said. “There is well-established evidence supporting the benefits of viral testing in certain contexts, such as hospitalized patients, to guide infection control practices or in specific high-risk populations.”

Future Research

Additional studies should look at testing among subgroups, particularly those with high-risk conditions, the study authors wrote. In addition, the research team would like to study the implementation of novel diagnostic stewardship programs as compared with well-established antibiotic stewardship programs.

“Acute respiratory tract illnesses represent one of the most common reasons for being evaluated in an acute care setting, especially in pediatrics, and these visits have traditionally resulted in excessive antibiotic prescribing, despite the etiology of the infection mostly being viral,” said Suchitra Rao, MBBS, associate professor of pediatrics at the University of Colorado School of Medicine and associate medical director of infection prevention and control at Children’s Hospital Colorado, Aurora.

Dr. Rao, who wasn’t involved with this study, has surveyed ED providers about respiratory viral testing and changes in clinical decision-making. She and colleagues found that providers most commonly changed clinical decision-making by prescribing an antiviral when influenza was detected or withholding antivirals when influenza wasn’t detected.

“Multiplex testing for respiratory viruses and atypical bacteria is becoming more widespread, with newer-generation platforms having shorter turnaround times, and offers the potential to impact point-of-care decision-making,” she said. “However, these tests are expensive, and more studies are needed to explore whether respiratory pathogen panel testing in the acute care setting has an impact in terms of reduced antibiotic use as well as other outcomes, including ED visits, health-seeking behaviors, and hospitalization.”

For instance, newer studies that include SARS-CoV-2 testing on newer-generation panels, as well as multiplex panels with numerous viral targets, may make a difference, she said.

“Further RCTs are required to evaluate the impact of influenza/RSV/SARS-CoV-2 panels, as well as respiratory pathogen panel testing in conjunction with antimicrobial and diagnostic stewardship efforts, which have been associated with improved outcomes for other rapid molecular platforms, such as blood culture identification panels,” Rao said.

The study was funded by the Research Institute of the McGill University Health Center. Dr. Schober reported no disclosures, and several study authors reported grants or personal fees from companies outside of this research. Dr. Rao disclosed no relevant relationships.

A version of this article appeared on Medscape.com.


Cell-Free DNA Blood Test Has High Accuracy for Detecting Colorectal Cancer


A cell-free DNA (cfDNA) blood test, aimed at detecting abnormal DNA signals in people with an average risk of colorectal cancer (CRC), correctly detected CRC in most people confirmed to have the disease, according to a new study.

The cfDNA blood test had 83% sensitivity for CRC, 90% specificity for advanced neoplasia, and 13% sensitivity for advanced precancerous lesions. Other noninvasive screening methods have sensitivity from 67% to 94% for CRC and 22% to 43% for advanced precancerous lesions.

“The results of the study are a promising step toward developing more convenient tools to detect colorectal cancer early while it is more easily treated,” said senior author William M. Grady, MD, AGAF, medical director of the Gastrointestinal Cancer Prevention Program at the Fred Hutchinson Cancer Center in Seattle.

“The test, which has an accuracy rate for colon cancer detection similar to stool tests used for early detection of cancer, could offer an alternative for patients who may otherwise decline current screening options,” he said.

The study was published online on March 14 in The New England Journal of Medicine.
 

Analyzing the Blood Test’s Accuracy 

Dr. Grady and colleagues conducted a multisite clinical trial called ECLIPSE, which compared the sensitivity and specificity of a cfDNA blood test (Shield, Guardant Health) against those obtained with colonoscopy, the gold standard for CRC screening. Guardant led and funded the study.

Guardant’s Shield test is designed to detect CRC through genomic alterations, aberrant methylation status, and fragmentomic patterns, which show up as an “abnormal signal detected” result. Similar blood tests are being developed as “liquid biopsy” tests for other emerging cancer screenings as well.

The study included 7861 people with average CRC risk who underwent routine screening with colonoscopy at 265 sites in the United States, including primary care and endoscopy centers in academic and community-based institutions. Eligible people were aged 45-84 years (average age, 60 years), and 53.7% were women. The race and ethnicity characteristics of the participants closely mirrored the demographic distribution in the 2020 US Census.

Overall, 54 of 65 (83.1%) participants with colonoscopy-detected CRC had a positive cfDNA blood test. However, 11 participants (16.9%) with CRC had a negative test.

The cfDNA blood test identified 42 of 48 stage I, II, or III CRCs, indicating a sensitivity of 87.5%, including 65% for stage I cancers, 100% for stage II cancers, and 100% for stage III cancers. The test also identified all 10 of the stage IV CRC cases. There were no substantial differences in sensitivity for CRC based on primary tumor location, tumor histologic grade, or demographic characteristics.

Among participants without advanced colorectal neoplasia on colonoscopy, 89.6% had a negative cfDNA blood test, and 10.4% had a positive test. 

Among those with a negative colonoscopy — with no CRC, advanced precancerous lesions, or nonadvanced precancerous lesions — specificity was 89.9%.

Among 1116 participants with advanced precancerous lesions identified as the most advanced lesion on colonoscopy, the cfDNA blood test was positive for 147, indicating a sensitivity for advanced precancerous lesions of 13.2%.
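These percentages follow directly from the reported counts, since sensitivity is simply true positives divided by all participants who had the condition on colonoscopy. A minimal sketch using the counts above (the function name is illustrative):

```python
# Sensitivity = true positives / all participants with the condition,
# using the counts reported in the article.

def sensitivity(true_positives: int, total_with_condition: int) -> float:
    """Fraction of condition-positive participants with a positive test."""
    return true_positives / total_with_condition

print(f"{sensitivity(54, 65):.1%}")     # CRC overall: 83.1%
print(f"{sensitivity(42, 48):.1%}")     # stage I-III CRC: 87.5%
print(f"{sensitivity(147, 1116):.1%}")  # advanced precancerous lesions: 13.2%
```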

Although the blood test has sensitivity similar to stool-based tests for CRC, the accuracy is lower than it is with colonoscopy, which remains the current gold standard for CRC screening, Dr. Grady said.

“Colorectal cancer is common and very preventable with screening, but only about 50% to 60% of people who are eligible for screening actually take those tests,” he said. “Getting people to be screened for cancer works best when we offer them screening options and then let them choose what works best for them.”

Future Research

Colorectal cancer is the second leading cause of cancer-related death among US adults and is now the third most diagnosed cancer for people younger than 50 years, Dr. Grady said. Although overall CRC death rates have declined in recent years, the rates among those younger than 55 years have increased since the mid-2000s.

“When colorectal cancer is found earlier and the cancer has not yet spread throughout the body, patient outcomes are much better, as reflected in 5-year survival being much better. It makes sense that an effective blood-based test could have a potential role, in particular for those not getting screened yet,” said Joshua Melson, MD, AGAF, clinical professor of medicine and director of the High-Risk Clinic for Gastrointestinal Cancers at the University of Arizona Cancer Center in Tucson.

Dr. Melson, who wasn’t involved with this study, noted that blood-based testing shows promise for cancer detection but needs additional support for real-world implementation. For instance, the Shield blood test has difficulty detecting precancerous lesions, and it remains unclear what the optimal intervals for repeat testing would be after a negative test, he said. In addition, screening programs will need to ensure they have capacity to effectively deal with a positive test result.

“For a screening program to actually work, when a noninvasive test (whether blood-based or stool-based) is read as positive, those patients need to have a follow-up colonoscopy,” he said. 

Proper communication with patients will be important as well, said Gloria Coronado, PhD, associate director of Population Sciences at the University of Arizona Cancer Center, Tucson. Dr. Coronado, who wasn’t involved with this study, has developed CRC screening messages for specific patient populations and studied patient reactions to CRC blood tests. 

In a study by Dr. Coronado and colleagues, among more than 2000 patients who passively declined fecal testing and had an upcoming clinic visit, CRC screening proportions were 17.5 percentage points higher in the group offered the blood test vs those offered usual care. In qualitative interviews, one patient said of the blood-based testing option, “I was screaming hallelujah!”

“Patients believed that a blood test would be more accurate than a stool-based test. However, for the detection of advanced adenomas, the reverse is true,” she said. “It will be important to balance the high acceptance and enthusiasm for the blood test with the lower performance of the blood test compared to other tests already on the market.”

In a statement accompanying the study’s publication, the American Gastroenterological Association welcomed these results as an exciting development, but cautioned that a blood-based test was not interchangeable with colonoscopy.

“The Centers for Medicare and Medicaid Services (CMS) has determined it will cover a blood test for colorectal cancer screening every three years if the test achieves 74% sensitivity for CRC, 90% specificity, and FDA approval,” the statement reads. “However, a blood test that meets only the CMS criteria will be inferior to current recommended tests and should not be recommended to replace current tests. Such a test could be recommended for patients who decline all other recommended tests, since any screening is better than no screening at all.”

Dr. Grady is a paid member of Guardant’s scientific advisory board and advised on the design and procedure of the clinical trial and data analysis. Dr. Melson previously served as consultant for Guardant. Dr. Coronado reported no relevant disclosures. 

A version of this article appeared on Medscape.com.


No-Biopsy Approach to Celiac Disease Diagnosis Appears Effective for Select Adult Patients


Select adult patients with immunoglobulin A-tissue transglutaminase antibody levels (IgA-tTG) greater than or equal to 10 times the upper limit of normal (ULN) and a moderate-to-high pretest probability of celiac disease could be diagnosed without undergoing invasive endoscopy and duodenal biopsy, according to a new study.

Current international guidelines recommend duodenal biopsies to confirm a celiac disease diagnosis in adult patients, but growing evidence suggests invasive procedures may not be needed, the authors wrote.

“Our study confirms the high accuracy of serology-based diagnosis of coeliac disease in select adult patients,” said Mohamed G. Shiha, MBBCh, MRCP, lead author and a clinical research fellow in gastroenterology at Sheffield Teaching Hospitals in the United Kingdom.

“This no-biopsy approach could lead to a shorter time to diagnosis, increased patient satisfaction, and reduced healthcare costs,” he said.

The study was published online in Gastroenterology.
 

Evaluating the No-Biopsy Approach

Dr. Shiha and colleagues conducted a systematic review and meta-analysis to evaluate the accuracy of a no-biopsy approach for diagnosing celiac disease in adults. They looked for studies that reported the sensitivity and specificity of IgA-tTG ≥10xULN compared with duodenal biopsies (with a Marsh grade ≥2) in adults with suspected celiac disease.

The research team used a bivariate random-effects model to calculate the summary estimates of sensitivity, specificity, and positive and negative likelihood ratios. Then the positive and negative likelihood ratios were used to calculate the positive predictive value (PPV) of the no-biopsy approach across different pretest probabilities of celiac disease.

Among 18 studies with 12,103 participants from 15 countries, the pooled prevalence of biopsy-proven celiac disease was 62%. The proportion of patients with IgA-tTG ≥10xULN was 32%.

The summary sensitivity of IgA-tTG ≥10xULN was 51%, and the summary specificity was 100% for the diagnosis of celiac disease. The positive and negative likelihood ratios were 183.42 and 0.49, respectively. The area under the summary receiver operating characteristic curve was 0.83.

Overall, the PPV of IgA-tTG ≥10xULN to identify patients with celiac disease was 98%, which varied according to the pretest probability of celiac disease in the studied population. Specifically, the PPV was 65%, 88%, 95%, and 99% if celiac disease prevalence was 1%, 4%, 10%, and 40%, respectively. The 40% figure represents the lower bound of the confidence interval for the pooled prevalence in the included studies, the authors noted.
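Those PPV estimates are consistent with the standard odds form of Bayes’ theorem, in which posterior odds equal pretest odds multiplied by the positive likelihood ratio. A minimal sketch using the pooled LR+ of 183.42 (the function itself is illustrative, not code from the study):

```python
# Positive predictive value from pretest probability via Bayes' theorem
# in odds form: posterior odds = pretest odds * LR+.
# LR+ = 183.42 is the pooled estimate reported above.

def ppv(pretest_probability: float, lr_positive: float = 183.42) -> float:
    """PPV of a positive test at a given pretest probability."""
    pretest_odds = pretest_probability / (1.0 - pretest_probability)
    posterior_odds = pretest_odds * lr_positive
    return posterior_odds / (1.0 + posterior_odds)

for prevalence in (0.01, 0.04, 0.10, 0.40):
    print(f"pretest probability {prevalence:.0%}: PPV {ppv(prevalence):.0%}")
# Yields 65%, 88%, 95%, and 99%, matching the reported values.
```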

“We provided PPV estimates of IgA-tTG ≥10xULN for common pretest probabilities of coeliac disease to aid clinicians and patients in reaching an informed decision on a no-biopsy diagnosis based on the best available evidence,” the authors wrote.
 

Considering Additional Factors

Due to the increased accuracy of serological tests, pediatric guidelines have adopted a no-biopsy approach, the authors wrote. Children with IgA-tTG ≥10xULN and positive serum endomysial antibodies (EMA) can be diagnosed with celiac disease without biopsy.

However, the no-biopsy approach remains controversial for diagnosing adult patients and requires additional study, the authors wrote. They noted a limitation that all included studies were conducted in secondary and tertiary care settings and excluded patients with known celiac disease or on a gluten-free diet, so the results may not be generalizable to primary care settings.

In addition, relying on serology testing alone could lead to potential false-positive diagnoses, unnecessary dietary restriction, and negative effects on patients’ quality of life, the authors wrote.

At the same time, duodenal biopsy may not always be accurate due to inadequate sampling and could result in false-negative histology. The no-biopsy approach could mitigate this potential risk, the authors noted.

“This study systematically collates the growing data supporting the accuracy of antibody testing to diagnose celiac disease,” said Benjamin Lebwohl, MD, AGAF, professor of medicine and epidemiology at Columbia University Medical Center and director of clinical research for the Celiac Disease Center at Columbia University, New York. Dr. Lebwohl wasn’t involved with this study.

“We have historically relied on duodenal biopsy to confirm the diagnosis of celiac disease, and the biopsy will still have a central role in most cases in the foreseeable future,” he said. “But as we hone our understanding of antibody testing, one day we may be able to accept or even recommend a biopsy-free approach in select patients.”

Two authors reported grant support from the National Institute for Health and Care Research and National Institute of Diabetes and Digestive and Kidney Diseases. Dr. Shiha reported speaker honorarium from Thermo Fisher. Dr. Lebwohl reported no relevant disclosures.

Publications
Topics
Sections

Select adult patients with immunoglobulin A-tissue transglutaminase antibody levels (IgA-tTG) greater than or equal to 10 times the upper limit of normal (ULN) and a moderate-to-high pretest probability of celiac disease could be diagnosed without undergoing invasive endoscopy and duodenal biopsy, according to a new study.

Current international guidelines recommend duodenal biopsies to confirm a celiac disease diagnosis in adult patients, but growing evidence suggests invasive procedures may not be needed, the authors wrote.

“Our study confirms the high accuracy of serology-based diagnosis of coeliac disease in select adult patients,” said Mohamed G. Shiha, MBBCh, MRCP, lead author and a clinical research fellow in gastroenterology at Sheffield Teaching Hospitals in the United Kingdom.

iStock/Getty Images

“This no-biopsy approach could lead to a shorter time to diagnosis, increased patient satisfaction, and reduced healthcare costs,” he said.

The study was published online in Gastroenterology.
 

Evaluating the No-Biopsy Approach

Dr. Shiha and colleagues conducted a systematic review and meta-analysis to evaluate to the accuracy of a no-biopsy approach for diagnosing celiac disease in adults. They looked for studies that reported the sensitivity and specificity of IgA-tTG ≥10xULN compared with duodenal biopsies (with a Marsh grade ≥2) in adults with suspected celiac disease.

The research team used a bivariate random-effects model to calculate the summary estimates of sensitivity, specificity, and positive and negative likelihood ratios. Then the positive and negative likelihood ratios were used to calculate the positive predictive value (PPV) of the no-biopsy approach across different pretest probabilities of celiac disease.

Among 18 studies with 12,103 participants from 15 countries, the pooled prevalence of biopsy-proven celiac disease was 62%. The proportion of patients with IgA-tTG ≥10xULN was 32%.

The summary sensitivity of IgA-tTG ≥10xULN was 51%, and the summary specificity was 100% for the diagnosis of celiac disease. The positive and negative likelihood ratios were 183.42 and .49, respectively. The area under the summary receiver operating characteristic curve was .83.

Overall, the PPV of IgA-tTG ≥10xULN to identify patients with celiac disease was 98%, which varied according to pretest probability of celiac disease in the studied population. Specifically, the PPV was 65%, 88%, 95%, and 99% if celiac disease prevalence was 1%, 4%, 10%, and 40%, respectively. The 40% prevalence represents the lower confidence interval of the pooled prevalence from the included studies, the authors noted.

“We provided PPV estimates of IgA-tTG ≥10xULN for common pretest probabilities of coeliac disease to aid clinicians and patients in reaching an informed decision on a no-biopsy diagnosis based on the best available evidence,” the authors wrote.
 

Considering Additional Factors

Due to the increased accuracy of serological tests, pediatric guidelines have adopted a no-biopsy approach, the authors wrote. Children with IgA-tTG ≥10xULN and positive serum endomysial antibodies (EMA) can be diagnosed with celiac disease without biopsy.

However, the no-biopsy approach remains controversial for diagnosing adult patients and requires additional study, the authors wrote. They noted a limitation that all included studies were conducted in secondary and tertiary care settings and excluded patients with known celiac disease or on a gluten-free diet, so the results may not be generalizable to primary care settings.

In addition, relying on serology testing alone could lead to potential false-positive diagnoses, unnecessary dietary restriction, and negative effects on patients’ quality of life, the authors wrote.

At the same time, duodenal biopsy may not always be accurate due to inadequate sampling and could result in false-negative histology. The no-biopsy approach could mitigate this potential risk, the authors noted.

“This study systematically collates the growing data supporting the accuracy of antibody testing to diagnose celiac disease,” said Benjamin Lebwohl, MD, AGAF, professor of medicine and epidemiology at Columbia University Medical Center and director of clinical research for the Celiac Disease Center at Columbia University, New York. Dr. Lebwohl wasn’t involved with this study.

Dr. Benjamin Lebwohl


“We have historically relied on duodenal biopsy to confirm the diagnosis of celiac disease, and the biopsy will still have a central role in most cases in the foreseeable future,” he said. “But as we hone our understanding of antibody testing, one day we may be able to accept or even recommend a biopsy-free approach in select patients.”

Two authors reported grant support from the National Institute for Health and Care Research and National Institute of Diabetes and Digestive and Kidney Diseases. Dr. Shiha reported speaker honorarium from Thermo Fisher. Dr. Lebwohl reported no relevant disclosures.

Select adult patients with immunoglobulin A-tissue transglutaminase antibody levels (IgA-tTG) greater than or equal to 10 times the upper limit of normal (ULN) and a moderate-to-high pretest probability of celiac disease could be diagnosed without undergoing invasive endoscopy and duodenal biopsy, according to a new study.

Current international guidelines recommend duodenal biopsies to confirm a celiac disease diagnosis in adult patients, but growing evidence suggests invasive procedures may not be needed, the authors wrote.

“Our study confirms the high accuracy of serology-based diagnosis of coeliac disease in select adult patients,” said Mohamed G. Shiha, MBBCh, MRCP, lead author and a clinical research fellow in gastroenterology at Sheffield Teaching Hospitals in the United Kingdom.

iStock/Getty Images

“This no-biopsy approach could lead to a shorter time to diagnosis, increased patient satisfaction, and reduced healthcare costs,” he said.

The study was published online in Gastroenterology.
 

Evaluating the No-Biopsy Approach

Dr. Shiha and colleagues conducted a systematic review and meta-analysis to evaluate to the accuracy of a no-biopsy approach for diagnosing celiac disease in adults. They looked for studies that reported the sensitivity and specificity of IgA-tTG ≥10xULN compared with duodenal biopsies (with a Marsh grade ≥2) in adults with suspected celiac disease.

The research team used a bivariate random-effects model to calculate the summary estimates of sensitivity, specificity, and positive and negative likelihood ratios. Then the positive and negative likelihood ratios were used to calculate the positive predictive value (PPV) of the no-biopsy approach across different pretest probabilities of celiac disease.

Among 18 studies with 12,103 participants from 15 countries, the pooled prevalence of biopsy-proven celiac disease was 62%. The proportion of patients with IgA-tTG ≥10xULN was 32%.

The summary sensitivity of IgA-tTG ≥10xULN was 51%, and the summary specificity was 100% for the diagnosis of celiac disease. The positive and negative likelihood ratios were 183.42 and .49, respectively. The area under the summary receiver operating characteristic curve was .83.

Overall, the PPV of IgA-tTG ≥10xULN to identify patients with celiac disease was 98%, which varied according to pretest probability of celiac disease in the studied population. Specifically, the PPV was 65%, 88%, 95%, and 99% if celiac disease prevalence was 1%, 4%, 10%, and 40%, respectively. The 40% prevalence represents the lower confidence interval of the pooled prevalence from the included studies, the authors noted.

“We provided PPV estimates of IgA-tTG ≥10xULN for common pretest probabilities of coeliac disease to aid clinicians and patients in reaching an informed decision on a no-biopsy diagnosis based on the best available evidence,” the authors wrote.
 

Considering Additional Factors

Due to the increased accuracy of serological tests, pediatric guidelines have adopted a no-biopsy approach, the authors wrote. Children with IgA-tTG ≥10xULN and positive serum endomysial antibodies (EMA) can be diagnosed with celiac disease without biopsy.

However, the no-biopsy approach remains controversial for diagnosing adult patients and requires additional study, the authors wrote. They noted a limitation that all included studies were conducted in secondary and tertiary care settings and excluded patients with known celiac disease or on a gluten-free diet, so the results may not be generalizable to primary care settings.

In addition, relying on serology testing alone could lead to potential false-positive diagnoses, unnecessary dietary restriction, and negative effects on patients’ quality of life, the authors wrote.

At the same time, duodenal biopsy may not always be accurate due to inadequate sampling and could result in false-negative histology. The no-biopsy approach could mitigate this potential risk, the authors noted.

“This study systematically collates the growing data supporting the accuracy of antibody testing to diagnose celiac disease,” said Benjamin Lebwohl, MD, AGAF, professor of medicine and epidemiology at Columbia University Medical Center and director of clinical research for the Celiac Disease Center at Columbia University, New York. Dr. Lebwohl wasn’t involved with this study.

“We have historically relied on duodenal biopsy to confirm the diagnosis of celiac disease, and the biopsy will still have a central role in most cases in the foreseeable future,” he said. “But as we hone our understanding of antibody testing, one day we may be able to accept or even recommend a biopsy-free approach in select patients.”

Two authors reported grant support from the National Institute for Health and Care Research and National Institute of Diabetes and Digestive and Kidney Diseases. Dr. Shiha reported speaker honorarium from Thermo Fisher. Dr. Lebwohl reported no relevant disclosures.


Mood Interventions May Reduce IBD Inflammation

Article Type
Changed
Tue, 02/13/2024 - 11:31

Various interventions that improve mood — such as psychological therapy, antidepressants, and exercise — reduce markers of general inflammation as well as disease-specific biomarkers in people with inflammatory bowel disease (IBD), according to a new study.

“IBD is a distressing condition, and current medication that reduces inflammation is expensive and can have side effects,” said Natasha Seaton, first author and a PhD student at the Institute of Psychiatry, Psychology and Neuroscience (IoPPN) at King’s College London.

“Our study showed that interventions that treat mental health reduce levels of inflammation in the body,” she said. “This indicates that mood interventions could be a valuable tool in our approach to help those with IBD.”

The study was published online in eBioMedicine.
 

Analyzing Mood Interventions

Ms. Seaton and colleagues conducted a systematic review and meta-analysis of randomized controlled trials in adults with IBD that measured inflammatory biomarker levels and tested a mood intervention, including those aimed at reducing depression, anxiety, stress, or distress or improving emotional well-being.

Looking at data from 28 randomized controlled trials with 1789 participants, the research team evaluated whether mood interventions affected IBD inflammation, particularly IBD indicators such as C-reactive protein and fecal calprotectin, and other general inflammatory biomarkers.

The researchers found that mood interventions significantly reduced levels of inflammatory biomarkers compared with controls, corresponding to an 18% reduction overall.

Psychological therapies had the best outcomes related to IBD inflammation, compared with antidepressants or exercise. These therapies included cognitive behavioral therapy, acceptance and commitment therapy, and mindfulness-based stress reduction.

Individual analyses of IBD-specific inflammatory markers found small but statistically significant reductions in C-reactive protein and fecal calprotectin after a mood intervention. This could mean mood treatments have positive effects on both inflammation and disease-specific biomarkers, the authors wrote.

In addition, interventions that had a larger positive effect on mood had a greater effect in reducing inflammatory biomarkers. This suggests that a better mood could reduce IBD inflammation, they noted.

“We know stress-related feelings can increase inflammation, and the findings suggest that by improving mood, we can reduce this type of inflammation,” said Valeria Mondelli, MD, PhD, clinical professor of psychoneuroimmunology at King’s IoPPN.

“This adds to the growing body of research demonstrating the role of inflammation in mental health and suggests that interventions working to improve mood could also have direct physical effects on levels of inflammation,” she said. “However, more research is needed to understand exact mechanisms in IBD.”
 

Cost Benefit

Many IBD interventions and medications can be expensive for patients, have significant negative side effects, and have a lower long-term treatment response, the authors noted. Mood interventions, whether psychological therapy or medication, could potentially reduce costs and improve both mood and inflammation.

Previous studies have indicated that psychosocial factors, as well as mood disorders such as anxiety and depression, affect IBD symptom severity and progression, the authors wrote. However, researchers still need to understand the mechanisms behind this connection, including gut-brain dynamics.

Future research should focus on interventions that have been effective at improving mood in patients with IBD, assess inflammation and disease activity at numerous timepoints, and include potential variables related to illness self-management, the authors wrote.

In addition, implementation of mood interventions for IBD management may require better continuity of care and healthcare integration.

“Integrated mental health support, alongside pharmacological treatments, may offer a more holistic approach to IBD care, potentially leading to reduced disease and healthcare costs,” said Rona Moss-Morris, PhD, senior author and professor of psychology at King’s IoPPN.

Medications taken to reduce inflammation can be costly compared with psychological therapies, she said. “Given this, including psychological interventions, such as cost-effective digital interventions, within IBD management might reduce the need for anti-inflammatory medication, resulting in an overall cost benefit.”

The study was funded by the Medical Research Council (MRC) and National Institute for Health and Care Research Maudsley Biomedical Research Centre, which is hosted by South London and Maudsley NHS Foundation Trust in partnership with King’s College London. Ms. Seaton was funded by an MRC Doctoral Training Partnership. No other interests were declared.

A version of this article appeared on Medscape.com.


Most Americans Believe Bariatric Surgery Is Shortcut, Should Be ‘Last Resort’: Survey

Article Type
Changed
Thu, 02/01/2024 - 11:07

Most Americans’ views about obesity and bariatric surgery are colored by stigmas, according to a new survey from the Orlando Health healthcare system.

For example, most Americans believe that weight loss surgery should be pursued only as a last resort and that bariatric surgery is a shortcut to shedding pounds, the survey found.

Common stigmas could be deterring people who qualify for bariatric surgery from pursuing it, according to Orlando Health, located in Florida.

“Bariatric surgery is by no means an easy way out. If you have the courage to ask for help and commit to doing the hard work of changing your diet and improving your life, you’re a champion in my book,” said Andre Teixeira, MD, medical director and bariatric surgeon at Orlando Health Weight Loss and Bariatric Surgery Institute, Orlando, Florida.

“Surgery is simply a tool to jumpstart that change,” he said. “After surgery, it is up to the patient to learn how to eat well, implement exercise into their routine, and shift their mindset to maintain their health for the rest of their lives.”

The survey results were published in January by Orlando Health.
 

Surveying Americans

The national survey, conducted for Orlando Health by the market research firm Ipsos in early November 2023, asked 1017 US adults whether they agreed or disagreed with several statements about weight loss and bariatric surgery. The statements and responses are as follows:

  • “Weight loss surgery is a shortcut to shedding pounds” — 60% strongly or somewhat agreed, 38% strongly or somewhat disagreed, and the remainder declined to answer.
  • “Weight loss surgery is cosmetic and mainly impacts appearance” — 37% strongly or somewhat agreed, 61% strongly or somewhat disagreed, and the remainder declined to respond.
  • “Exercise and diet should be enough for weight loss” — 61% strongly or somewhat agreed, 37% strongly or somewhat disagreed, and the remainder declined to respond.
  • “Weight loss surgery should only be pursued as a last resort” — 79% strongly or somewhat agreed, 19% strongly or somewhat disagreed, and the remainder declined to answer.
  • “Surgery should be more socially accepted as a way to lose weight” — 46% strongly or somewhat agreed, 52% strongly or somewhat disagreed, and the remainder declined to respond.

Men were more likely than women to hold negative views of weight loss surgery. For example, 66% of men vs 54% of women saw weight loss surgery as a shortcut to losing weight. Conversely, 42% of men vs 50% of women said that surgery should be a more socially accepted weight loss method.

Opinions that might interfere with the willingness to have weight loss surgery were apparent among people with obesity. The survey found that 65% of respondents with obesity and 59% with extreme obesity view surgery as a shortcut. Eighty-two percent of respondents with obesity and 68% with extreme obesity see surgery as a last resort.

At the end of 2022, the American Society of Metabolic and Bariatric Surgery and the International Federation for the Surgery of Obesity and Metabolic Disorders updated their guidelines for metabolic and bariatric surgery for the first time since 1991, with the aim of expanding access to surgery, Orlando Health noted. However, only 1% of those who are clinically eligible end up undergoing weight loss surgery, even with advancements in laparoscopic and robotic techniques that have made it safer and less invasive, the health system added.

“Because of the stigma around obesity and bariatric surgery, so many of my patients feel defeated if they can’t lose weight on their own,” said Muhammad Ghanem, MD, a bariatric surgeon at Orlando Health.

“But when I tell them obesity is a disease and that many of its causes are outside of their control, you can see their relief,” he said. “They often even shed a tear because they’ve struggled with their weight all their lives and finally have some validation.”

Individualizing Treatment

Obesity treatment plans should be tailored to patients on the basis of individual factors such as body mass index, existing medical conditions, and family history, Dr. Teixeira said.

Besides bariatric surgery, patients also may consider options such as counseling, lifestyle changes, and medications including the latest weight loss drugs, he added.

The clinical approach to obesity treatment has evolved, said Miguel Burch, MD, director of general surgery and chief of minimally invasive and gastrointestinal surgery at Cedars-Sinai, Los Angeles, California, who was not involved in the survey.

“At one point in my career, I could say the only proven durable treatment for obesity is weight loss surgery. This was in the context of patients who were morbidly obese requiring risk reduction, not for a year or two but for decades, and not for 10-20 pounds but for 40-60 pounds of weight loss,” said Dr. Burch, who also directs the bariatric surgery program at Torrance Memorial Medical Center, Torrance, California.

“That was a previous era. We are now in a new one with the weight loss drugs,” Dr. Burch said. “In fact, it’s wonderful to have the opportunity to serve so many patients with an option other than just surgery.”

Still, Dr. Burch added, “we have to change the way we look at obesity management as being either surgery or medicine and start thinking about it more as a multidisciplinary approach to a chronic and potentially relapsing disease.”

A version of this article appeared on Medscape.com.


New Guideline Offers Recommendations for Alcohol-Associated Liver Disease

Article Type
Changed
Wed, 01/31/2024 - 13:40

To curb alcohol-associated liver disease (ALD), alcohol consumption should be avoided by those with underlying obesity, chronic hepatitis C infection, hepatitis B virus infection, or a history of gastric bypass, according to a new clinical guideline from the American College of Gastroenterology.

In addition, health systems need to overcome barriers to treating alcohol use disorder (AUD) and commit to creating a multidisciplinary care model with behavioral interventions and pharmacotherapy for patients.

Experts were convened to develop these guidelines because it was “imperative to provide an up-to-date, evidence-based blueprint for how to care for patients, as well as guide prevention and research efforts in the field of ALD for the coming years,” said the first author, Loretta Jophlin, MD, PhD, assistant professor of medicine in gastroenterology, hepatology, and nutrition and medical director of liver transplantation at the University of Louisville in Kentucky.

“In recent years, perhaps fueled by the COVID-19 pandemic, alcohol use has been normalized in an increasing number of situations,” she said. “Drinking was normalized as a coping mechanism to deal with many of the sorrows we experienced during the pandemic, including loss of purposeful work and social isolation, and many more people are struggling with AUD. So many aspects of our culture have been inundated by the presence of alcohol use, and we need to work hard to denormalize this, first focusing on at-risk populations.”

The guideline was published in the January issue of the American Journal of Gastroenterology.
 

Updating ALD Recommendations

With ALD as the most common cause of advanced hepatic disease and a frequent indication for liver transplantation, the rising incidence of alcohol use during the past decade has led to rapid growth in ALD-related healthcare burdens, the guideline authors wrote.

In particular, those with ALD tend to present at an advanced stage and progress faster, which can lead to progressive fibrosis, cirrhosis, and hepatocellular carcinoma. This can include alcohol-associated hepatitis (AH), which often presents with a rapid onset or worsening of jaundice and can lead to acute or chronic liver failure.

To update the guideline, Dr. Jophlin and colleagues analyzed data based on a patient-intervention-comparison-outcome format, resulting in 34 key concepts or statements and 21 recommendations.

Among them, the authors recommended screening and treating AUD with the goal of helping patients who have not yet developed significant liver injury and preventing progression to advanced stages of ALD, particularly among at-risk groups who have had an increasing prevalence of severe AUD, including women, younger people, and Hispanic and American Indian patients.

“So many patients are still told to ‘stop drinking’ or ‘cut back’ but are provided no additional resources. Without offering referrals to treatment programs or pharmacologic therapies to assist in abstinence, many patients are not successful,” Dr. Jophlin said. “We hope these guidelines empower providers to consider selected [Food and Drug Administration]-approved medications, well-studied off-label therapies, and nonpharmacologic interventions to aid their patients’ journeys to abstinence and hopefully avert the progression of ALD.”

In addition, the guidelines provide recommendations for AH treatment. In patients with severe AH, the authors offered strong recommendations against the use of pentoxifylline and prophylactic antibiotics, and in support of corticosteroid therapy and intravenous N-acetyl cysteine as an adjuvant to corticosteroids.

Liver transplantation, which may be recommended for carefully selected patients, is being performed at many centers but remains relatively controversial, Dr. Jophlin said.

“Questions remain about ideal patient selection as center practices vary considerably, yet we have started to realize the impacts of relapse after transplantation,” she said. “The guidelines highlight the knowns and unknowns in this area and will hopefully serve as a catalyst for the dissemination of centers’ experiences and the development of a universal set of ethically sound, evidence-based guidelines to be used by all transplant centers.”

Policy Implications

Dr. Jophlin and colleagues noted the importance of policy aimed at alcohol use reduction, multidisciplinary care for AUD and ALD, and additional research around severe AH.

“As a practicing transplant hepatologist and medical director of a liver transplant program in the heart of Bourbon country, I am a part of just one healthcare team experiencing ALD, particularly AH, as a mass casualty event. Healthcare teams are fighting an unrelenting fire that the alcohol industry is pouring gasoline on,” Dr. Jophlin said. “It is imperative that healthcare providers have a voice in the policies that shape this preventable disease. We hope these guidelines inspire practitioners to explore our influence on how alcohol is regulated, marketed, and distributed.”

Additional interventions and public policy considerations could help reduce alcohol-related morbidity and mortality at a moment when the characteristics of those who present with AUD appear to be evolving.

“The typical person I’m seeing now is not someone who has been drinking heavily for decades. Rather, it’s a young person who has been drinking heavily for many months or a couple of years,” said James Burton, MD, a professor of medicine at the University of Colorado School of Medicine and medical director of liver transplantation at the University of Colorado Hospital’s Anschutz Medical Campus in Aurora.

Dr. Burton, who wasn’t involved with the guideline, noted that it has become more common for people to have multiple alcoholic drinks per day, several times per week. Patients often don’t think it’s a problem, even as he discusses their liver-related issues.

“We can’t just keep living and working the way we were 10 years ago,” he said. “We’ve got to change how we approach treatment. We have to treat liver disease and AUD.”

The guideline was supported by several National Institutes of Health grants and an American College of Gastroenterology faculty development grant. The authors declared potential competing interests with various pharmaceutical companies. Dr. Burton reported no financial disclosures.

A version of this article appeared on Medscape.com.


Colorectal Cancer Risk Increasing Across Successive Birth Cohorts

Article Type
Changed
Tue, 02/06/2024 - 11:56

Colorectal cancer (CRC) epidemiology is changing due to a birth cohort effect, also called birth cohort CRC — the observed phenomenon of rising CRC risk across successive generations of people born in 1960 and later — according to a new narrative review.

Birth cohort CRC is associated with rising rates of rectal cancer diagnoses (outpacing colon cancer), rising rates of distant-stage diagnoses (outpacing local-stage disease), and a rising incidence of early-onset CRC (EOCRC), defined as CRC occurring before age 50.

Recognizing this birth cohort effect could improve understanding of CRC risk factors, etiology, and mechanisms, as well as the public health consequences of rising rates.

“The changing epidemiology means that we need to redouble our efforts at optimizing early detection and prevention of colorectal cancer,” Samir Gupta, MD, the review’s lead author and professor of gastroenterology at the University of California, San Diego, told this news organization. Dr. Gupta serves as the co-lead for the cancer control program at Moores Cancer Center at UC San Diego Health.

This requires “being alert for potential red flag signs and symptoms of colorectal cancer, such as iron deficiency anemia and rectal bleeding, that are otherwise unexplained, including for those under age 45,” he said.

“We also should make sure that all people eligible for screening — at age 45 and older — have every opportunity to get screened for colorectal cancer,” Dr. Gupta added.

The review was published online in Clinical Gastroenterology and Hepatology.
 

Tracking Birth Cohort Trends

CRC rates have increased in the United States among people born since the early 1960s, the authors wrote.

Generation X (individuals born in 1965-1980) experienced an increase in EOCRC, and rates subsequently increased in this generation after age 50. Rates are 1.22-fold higher among people born in 1965-1969 and 1.58-fold higher among those born 1975-1979 than among people born in 1950-1954.

Now rates are also increasing across younger generations, particularly among Millennials (individuals born in 1981-1996) as they enter mid-adulthood. Incidence rates are 1.89-fold higher among people born in 1980-1984 and 2.98-fold higher among those born in 1990-1994 than among individuals born in 1950-1954.

These birth cohort effects are evident globally, despite differences in population age structures, screening programs, and diagnostic strategies around the world. Due to this ongoing trend, physicians anticipate that CRC rates will likely continue to increase as higher-risk birth cohorts become older, the authors wrote.

Notably, four important shifts in CRC incidence are apparent, they noted. First, rates are steadily increasing up to age 50 and plateauing after age 60. Second, rectal cancers are now predominant through ages 50-59. Third, rates of distant-stage disease have increased most rapidly among ages 30-49 and decreased more slowly among ages 60-79, compared with local-stage disease. Fourth, increasing rates of EOCRC have been observed across all racial and ethnic groups since the early 1990s.

These shifts led to major changes in the types of patients diagnosed with CRC now vs 30 years ago, with a higher proportion being patients younger than 60, as well as Black, Asian or Pacific Islander, American Indian/Alaska Native, and Hispanic patients.

The combination of age-related increases in CRC and birth cohort–related trends will likely lead to substantial increases in the number of people diagnosed with CRC in coming years, especially as Generation X patients move into their 50s and 60s, the authors wrote.

Research and Clinical Implications

Birth cohort CRC, including increasing EOCRC incidence, likely is driven by a range of influences, including demographic, lifestyle, early life, environmental, genetic, and somatic factors, as well as interactions among them, the authors noted. Examples within these broad categories include male sex, food insecurity, income inequality, diabetes, alcohol use, less healthy dietary patterns, in utero exposure to certain medications, and microbiome concerns such as early life antibiotic exposure or dysbiosis.

“From a research perspective, this means that we need to think about risk factors and mechanisms that are associated with birth cohorts, not just age at diagnosis,” Dr. Gupta said. “To date, most studies of changing epidemiology have not taken into account birth cohort, such as whether someone is Generation X or later versus pre-Baby Boomer.”

Although additional research is needed, the epidemiology changes have several immediate clinical implications, Dr. Gupta said. For those younger than 45, it is critical to raise awareness about the signs and symptoms of CRC, such as hematochezia, iron deficiency anemia, and unintentional weight loss, as well as family history.

For ages 45 and older, a major focus should be placed on increasing screening participation and follow-up after abnormal results, addressing disparities in screening participation, and optimizing screening quality.

In addition, as CRC incidence continues to increase, health systems and policymakers should ensure every patient has access to guideline-appropriate care and innovative clinical trials, the authors wrote. This access may be particularly important to address the increasing burden of rectal cancer, as treatment approaches rapidly evolve toward more effective therapies, such as neoadjuvant chemotherapy and radiation prior to surgery, and with less-morbid treatments on the horizon, they added.
 

‘An Interesting Concept’

“Birth cohort CRC is an interesting concept that allows people to think of their CRC risk according to their birth cohort in addition to age,” Shuji Ogino, MD, PhD, chief of the Molecular Pathological Epidemiology program at Brigham & Women’s Hospital, Boston, Massachusetts, told this news organization.

Dr. Ogino, who wasn’t involved with this study, serves as a member of the cancer immunology and cancer epidemiology programs at the Dana-Farber Harvard Cancer Center. In studies of EOCRC, he and colleagues have found various biogeographical and pathogenic trends across age groups.

“More research is needed to disentangle the complex etiologies of birth cohort CRC and early-onset CRC,” Dr. Ogino said. “Tumor cells and tissues have certain past and ongoing pathological marks, which we can detect to better understand birth cohort CRC and early-onset CRC.”

The study was funded by several National Institutes of Health/National Cancer Institute grants. Dr. Gupta disclosed consulting for Geneoscopy, Guardant Health, Universal Diagnostics, InterVenn Bio, and CellMax. Another author reported consulting for Freenome, Exact Sciences, Medtronic, and Geneoscopy. Dr. Ogino reported no relevant financial disclosures. 

A version of this article appeared on Medscape.com.


Smoking Associated With Increased Risk for Hair Loss Among Men

Article Type
Changed
Tue, 01/23/2024 - 06:54

Men who have smoked or currently smoke are significantly more likely to develop androgenetic alopecia (AGA) than men who have never smoked, according to a new study.

In addition, the odds of developing AGA are higher among those who smoke at least 10 cigarettes per day than among those who smoke less, the study authors found.

“Men who smoke are more likely to develop and experience progression of male pattern hair loss,” lead author Aditya Gupta, MD, PhD, professor of medicine at the University of Toronto, Toronto, and director of clinical research at Mediprobe Research Inc., London, Ontario, Canada, told this news organization.

“Our patients with male pattern baldness need to be educated about the negative effects of smoking, given that this condition can have a profound negative psychological impact on those who suffer from it,” he said.

The study was published online in the Journal of Cosmetic Dermatology.
 

Analyzing Smoking’s Effects

Smoking is generally accepted as a risk factor for the development and progression of AGA, the most common form of hair loss. However, the research evidence on this association has been inconsistent, the authors wrote.

The investigators conducted a review and meta-analysis of eight observational studies to understand the links between smoking and AGA. Ever-smokers were defined as current and former smokers.

Overall, based on six studies, men who have ever smoked are 1.8 times more likely (P < .05) to develop AGA than men who have never smoked.

Based on two studies, men who smoke 10 or more cigarettes daily are about twice as likely (P < .05) to develop AGA as those who smoke up to 10 cigarettes per day.

Based on four studies, ever-smokers have 1.3 times higher odds than never-smokers of AGA progressing from mild (ie, Norwood-Hamilton stages I-III) to more severe disease (stages IV-VII).

Based on two studies, there is no association between AGA progression and smoking intensity (defined as smoking up to 20 cigarettes daily vs 20 or more per day).

“Though our pooled analysis found no significant association between smoking intensity and severity of male AGA, a positive correlation may exist and be detected through an analysis that is statistically better powered,” said Dr. Gupta.
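For readers unfamiliar with how such pooled estimates are produced, the sketch below illustrates fixed-effect inverse-variance pooling of odds ratios on the log scale. The per-study numbers are hypothetical placeholders, not the data from the eight studies analyzed here, and the authors' actual model may have differed (for example, a random-effects model).

```python
import math

# Hypothetical per-study odds ratios with 95% CIs -- placeholders, not study data.
studies = [
    (1.9, 1.2, 3.0),  # (OR, CI lower, CI upper)
    (1.6, 1.0, 2.6),
    (2.1, 1.3, 3.4),
]

weighted_sum = 0.0
total_weight = 0.0
for odds_ratio, ci_low, ci_high in studies:
    log_or = math.log(odds_ratio)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)  # SE recovered from the CI
    weight = 1 / se**2                                        # inverse-variance weight
    weighted_sum += weight * log_or
    total_weight += weight

pooled_log_or = weighted_sum / total_weight
pooled_se = math.sqrt(1 / total_weight)
low = math.exp(pooled_log_or - 1.96 * pooled_se)
high = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"Pooled OR {math.exp(pooled_log_or):.2f} (95% CI {low:.2f}-{high:.2f})")
```

A pooled confidence interval that excludes 1 corresponds to the P < .05 significance reported for the estimates above.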

The investigators noted the limitations of their analysis, such as its reliance on observational studies and its lack of data about nicotine levels, smoking intensity, and smoking cessation among study participants.

Additional studies are needed to better understand the links between smoking and hair loss, as well as the effects of smoking cessation, Dr. Gupta said.

Improving Practice and Research

Commenting on the findings for this news organization, Arash Babadjouni, MD, a dermatologist at Midwestern University, Glendale, Arizona, said, “Smoking is not only a preventable cause of significant systemic disease but also affects the follicular growth cycle and fiber pigmentation. The prevalence of hair loss and premature hair graying is higher in smokers than nonsmokers.”

Dr. Babadjouni, who wasn’t involved with this study, has researched the associations between smoking and hair loss and premature hair graying.

“Evidence of this association can be used to clinically promote smoking cessation and emphasize the consequences of smoking on hair,” he said. “Smoking status should be assessed in patients who are presenting to their dermatologist and physicians alike for evaluation of alopecia and premature hair graying.”

The study was conducted without outside funding, and the authors declared no conflicts of interest. Dr. Babadjouni reported no relevant disclosures.

A version of this article appeared on Medscape.com.


AI Shows Potential for Detecting Mucosal Healing in Ulcerative Colitis

Article Type
Changed
Fri, 01/05/2024 - 14:07

Artificial intelligence (AI) systems show high potential for detecting mucosal healing in ulcerative colitis with optimal diagnostic performance, according to a new systematic review and meta-analysis.

AI algorithms replicated expert opinion with high sensitivity and specificity when evaluating images and videos. At the same time, the authors noted moderate-to-high heterogeneity across the underlying data.

“Artificial intelligence software is expected to potentially solve the longstanding issue of low-to-moderate interobserver agreement when human endoscopists are required to indicate mucosal healing or different grades of inflammation in ulcerative colitis,” Alessandro Rimondi, lead author and clinical fellow at the Royal Free Hospital and University College London Institute for Liver and Digestive Health, London, England, told this news organization.

“However, high levels of heterogeneity have been found, potentially linked to how differently the AI software was trained and how many cases it has been tested on,” he said. “This partially limits the quality of the body of evidence.”

The study was published online in Digestive and Liver Disease.
 

Evaluating AI Detection

In clinical practice, assessing mucosal healing in inflammatory bowel disease (IBD) is critical for evaluating a patient’s response to therapy and guiding strategies for treatment, surgery, and endoscopic surveillance. In an era of precision medicine, assessment of mucosal healing should be precise, readily available in an endoscopic report, and highly reproducible, which requires high accuracy and agreement in endoscopic diagnosis, the authors noted.

AI systems — particularly deep learning algorithms based on convolutional neural network architecture — may allow endoscopists to establish an objective and real-time diagnosis of mucosal healing and improve the average quality standards at primary and tertiary care centers, the authors wrote. Research on AI in IBD has looked at potential implications for endoscopy and clinical management, which opens new areas to explore.

Dr. Rimondi and colleagues conducted a systematic review of studies up to December 2022 that involved an AI-based system used to estimate any degree of endoscopic inflammation in IBD, whether ulcerative colitis or Crohn’s disease. They then conducted a diagnostic test accuracy meta-analysis restricted to the one area with more than five studies reporting diagnostic performance: mucosal healing in ulcerative colitis based on luminal imaging.

The researchers identified 12 studies with luminal imaging in patients with ulcerative colitis. Four evaluated the performance of AI systems on videos, six focused on fixed images, and two looked at both.

Overall, the AI systems achieved a satisfactory performance in evaluating mucosal healing in ulcerative colitis. When evaluating fixed images, the algorithms achieved a sensitivity of 0.91 and specificity of 0.89, with a diagnostic odds ratio (DOR) of 92.42, summary receiver operating characteristic curve (SROC) of 0.957, and area under the curve (AUC) of 0.957. When evaluating videos, the algorithms achieved 0.86 sensitivity, 0.91 specificity, 70.86 DOR, 0.941 SROC, and 0.941 AUC.
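For orientation, sensitivity, specificity, and the DOR all derive from a 2 × 2 confusion matrix, as the sketch below shows with hypothetical counts. Note that a pooled DOR from a bivariate meta-analytic model, as reported here, need not equal the DOR implied by the pooled sensitivity and specificity alone.

```python
# Hypothetical 2x2 counts for an AI mucosal-healing classifier -- not study data.
tp, fn = 91, 9    # truly healed mucosa: flagged correctly vs missed
tn, fp = 89, 11   # truly inflamed mucosa: flagged correctly vs falsely called healed

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
# Diagnostic odds ratio: odds of a positive result with disease vs without.
dor = (tp / fn) / (fp / tn)

print(f"Sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, DOR {dor:.1f}")
# With these counts: sensitivity 0.91, specificity 0.89, DOR ~81.8
```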

“It is exciting to see artificial intelligence expand and be effective for conditions beyond colon polyps,” Seth Gross, MD, professor of medicine and clinical chief of gastroenterology and hepatology at NYU Langone Health, New York, told this news organization.

Dr. Gross, who wasn’t involved with this study, has researched AI applications in endoscopy and colonoscopy. He and colleagues have found that machine learning software can improve lesion and polyp detection, serving as a “second set of eyes” for practitioners.

“Mucosal healing interpretation can be variable amongst providers,” he said. “AI has the potential to help standardize the assessment of mucosal healing in patients with ulcerative colitis.”

Improving AI Training

The authors found moderate-to-high levels of heterogeneity among the studies, which limited the quality of the evidence. Only two of the 12 studies used an external dataset to validate the AI systems, and one evaluated the AI system on a mixed database. However, seven used an internal validation dataset separate from the training dataset.

It is crucial to reach a shared consensus on training for AI models, with a common definition of mucosal healing and cutoff thresholds based on recent guidelines, Dr. Rimondi and colleagues noted. Training should ideally draw on a broad, shared database of images and videos with high interobserver agreement on the degree of inflammation, they added.

“We probably need a consensus or guidelines that identify the standards for training and testing newly developed software, stating the bare minimum number of images or videos for the training and testing sections,” Dr. Rimondi said.

In addition, given interobserver disagreement, an expert-validated database could serve as a gold standard, he added.

“In my opinion, artificial intelligence tends to better perform when it is required to evaluate a dichotomic outcome (such as polyp detection, which is a yes or no task) than when it is required to replicate more difficult tasks (such as polyp characterization or judging a degree of inflammation), which have a continuous range of expression,” Dr. Rimondi said.

The authors declared no financial support for this study. Dr. Rimondi and Dr. Gross reported no financial disclosures.

A version of this article appeared on Medscape.com.


GLP-1 RAs Associated With Reduced Colorectal Cancer Risk in Patients With Type 2 Diabetes

Article Type
Changed
Thu, 03/21/2024 - 13:07

Glucagon-like peptide 1 receptor agonists (GLP-1 RAs) are associated with a reduced risk for colorectal cancer (CRC) in patients with type 2 diabetes, with and without overweight or obesity, according to a new analysis.

In particular, GLP-1 RAs were associated with decreased risk compared with other antidiabetic treatments, including insulin, metformin, sodium-glucose cotransporter 2 (SGLT2) inhibitors, sulfonylureas, and thiazolidinediones.

More profound effects were seen in patients with overweight or obesity, “suggesting a potential protective effect against CRC partially mediated by weight loss and other mechanisms related to weight loss,” Lindsey Wang, an undergraduate student at Case Western Reserve University, Cleveland, Ohio, and colleagues wrote in JAMA Oncology.
 

Testing Treatments

GLP-1 RAs, usually given by injection, are approved by the US Food and Drug Administration to treat type 2 diabetes. They can lower blood sugar levels, improve insulin sensitivity, and help patients manage their weight.

Diabetes, overweight, and obesity are known risk factors for CRC and make prognosis worse. Ms. Wang and colleagues hypothesized that GLP-1 RAs might reduce CRC risk compared with other antidiabetics, including metformin and insulin, which have also been shown to reduce CRC risk.

Using a national database of more than 101 million electronic health records, Ms. Wang and colleagues conducted a population-based study of more than 1.2 million patients who had medical encounters for type 2 diabetes and were subsequently prescribed antidiabetic medications between 2005 and 2019. The patients had no prior antidiabetic medication use or CRC diagnosis.

The researchers analyzed the effects of GLP-1 RAs on CRC incidence compared with the other prescribed antidiabetic drugs, matching for demographics, adverse socioeconomic determinants of health, preexisting medical conditions, family and personal history of cancers and colonic polyps, lifestyle factors, and procedures such as colonoscopy.

During a 15-year follow-up, GLP-1 RAs were associated with decreased risk for CRC compared with insulin (hazard ratio [HR], 0.56), metformin (HR, 0.75), SGLT2 inhibitors (HR, 0.77), sulfonylureas (HR, 0.82), and thiazolidinediones (HR, 0.82) in the overall study population.

For instance, among 22,572 patients who took insulin, 167 cases of CRC occurred, compared with 94 cases among the matched GLP-1 RA cohort. Among 18,518 patients who took metformin, 153 cases of CRC occurred compared with 96 cases among the matched GLP-1 RA cohort.
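As a rough sanity check on these figures, a crude risk ratio can be read directly off the matched case counts, assuming the propensity-matched arms are the same size (as 1:1 matching implies; the paper's exact design is not reproduced here). The sketch below yields a value close to the reported HR for the insulin comparison; the metformin comparison diverges more, since adjusted time-to-event estimates need not match crude ratios.

```python
# Case counts from the matched comparisons over 15-year follow-up.
# Assumes 1:1 matching, so each GLP-1 RA arm equals its comparator arm in size.
comparisons = {
    "insulin":   {"comparator_cases": 167, "glp1_cases": 94, "n_per_arm": 22572},
    "metformin": {"comparator_cases": 153, "glp1_cases": 96, "n_per_arm": 18518},
}

for drug, counts in comparisons.items():
    risk_glp1 = counts["glp1_cases"] / counts["n_per_arm"]
    risk_comparator = counts["comparator_cases"] / counts["n_per_arm"]
    crude_rr = risk_glp1 / risk_comparator  # reduces to the ratio of case counts
    print(f"GLP-1 RA vs {drug}: crude RR {crude_rr:.2f} "
          f"({counts['glp1_cases']} vs {counts['comparator_cases']} cases)")
# insulin: crude RR 0.56, matching the reported HR of 0.56
# metformin: crude RR 0.63 vs the reported adjusted HR of 0.75
```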

GLP-1 RAs were also associated with a lower, but not statistically significant, risk compared with alpha-glucosidase inhibitors (HR, 0.59) and dipeptidyl peptidase-4 (DPP-4) inhibitors (HR, 0.93).

In patients with overweight or obesity, GLP-1 RAs were associated with a lower risk for CRC than most of the other antidiabetics, including insulin (HR, 0.5), metformin (HR, 0.58), SGLT2 inhibitors (HR, 0.68), sulfonylureas (HR, 0.63), thiazolidinediones (HR, 0.73), and DPP-4 inhibitors (HR, 0.77).

Consistent findings were observed in women and men.

“Our results clearly demonstrate that GLP-1 RAs are significantly more effective than popular antidiabetic drugs, such as metformin or insulin, at preventing the development of CRC,” said Nathan Berger, MD, co-lead researcher, professor of experimental medicine, and member of the Case Comprehensive Cancer Center.
 

Targets for Future Research

Study limitations include potential unmeasured or uncontrolled confounders, self-selection, reverse causality, and other biases involved in observational studies, the research team noted.

Further research is warranted to investigate the effects in patients with prior antidiabetic treatments, underlying mechanisms, potential variation in effects among different GLP-1 RAs, and the potential of GLP-1 RAs to reduce the risks for other obesity-associated cancers, the researchers wrote.

“To our knowledge, this is the first indication this popular weight loss and antidiabetic class of drugs reduces incidence of CRC, relative to other antidiabetic agents,” said Rong Xu, PhD, co-lead researcher, professor of medicine, and member of the Case Comprehensive Cancer Center.

The study was supported by the National Cancer Institute Case Comprehensive Cancer Center, American Cancer Society, Landon Foundation-American Association for Cancer Research, National Institutes of Health Director’s New Innovator Award Program, National Institute on Aging, and National Institute on Alcohol Abuse and Alcoholism. Several authors reported grants from the National Institutes of Health during the conduct of the study.
 

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

Glucagon-like peptide 1 receptor agonists (GLP-1 RAs) are associated with a reduced risk for colorectal cancer (CRC) in patients with type 2 diabetes, with and without overweight or obesity, according to a new analysis.

In particular, GLP-1 RAs were associated with decreased risk compared with other antidiabetic treatments, including insulinmetformin, sodium-glucose cotransporter 2 (SGLT2) inhibitors, sulfonylureas, and thiazolidinediones.

More profound effects were seen in patients with overweight or obesity, “suggesting a potential protective effect against CRC partially mediated by weight loss and other mechanisms related to weight loss,” Lindsey Wang, an undergraduate student at Case Western Reserve University, Cleveland, Ohio, and colleagues wrote in JAMA Oncology.
 

Testing Treatments

GLP-1 RAs, usually given by injection, are approved by the US Food and Drug Administration to treat type 2 diabetes. They can lower blood sugar levels, improve insulin sensitivity, and help patients manage their weight.

Diabetes, overweight, and obesity are known risk factors for CRC and make prognosis worse. Ms. Wang and colleagues hypothesized that GLP-1 RAs might reduce CRC risk compared with other antidiabetics, including metformin and insulin, which have also been shown to reduce CRC risk.

Using a national database of more than 101 million electronic health records, Ms. Wang and colleagues conducted a population-based study of more than 1.2 million patients who had medical encounters for type 2 diabetes and were subsequently prescribed antidiabetic medications between 2005 and 2019. The patients had no prior antidiabetic medication use nor CRC diagnosis.

The researchers analyzed the effects of GLP-1 RAs on CRC incidence compared with the other prescribed antidiabetic drugs, matching for demographics, adverse socioeconomic determinants of health, preexisting medical conditions, family and personal history of cancers and colonic polyps, lifestyle factors, and procedures such as colonoscopy.

During a 15-year follow-up, GLP-1 RAs were associated with decreased risk for CRC compared with insulin (hazard ratio [HR], 0.56), metformin (HR, 0.75), SGLT2 inhibitors (HR, 0.77), sulfonylureas (HR, 0.82), and thiazolidinediones (HR, 0.82) in the overall study population.

For instance, among 22,572 patients who took insulin, 167 cases of CRC occurred, compared with 94 cases among the matched GLP-1 RA cohort. Among 18,518 patients who took metformin, 153 cases of CRC occurred compared with 96 cases among the matched GLP-1 RA cohort.

GLP-1 RAs also were associated with lower but not statistically significant risk than alpha-glucosidase inhibitors (HR, 0.59) and dipeptidyl-peptidase-4 (DPP-4) inhibitors (HR, 0.93).

In patients with overweight or obesity, GLP-1 RAs were associated with a lower risk for CRC than most of the other antidiabetics, including insulin (HR, 0.5), metformin (HR, 0.58), SGLT2 inhibitors (HR, 0.68), sulfonylureas (HR, 0.63), thiazolidinediones (HR, 0.73), and DPP-4 inhibitors (HR, 0.77).

Consistent findings were observed in women and men.

“Our results clearly demonstrate that GLP-1 RAs are significantly more effective than popular antidiabetic drugs, such as metformin or insulin, at preventing the development of CRC,” said Nathan Berger, MD, co-lead researcher, professor of experimental medicine, and member of the Case Comprehensive Cancer Center.
 

Targets for Future Research

Study limitations include potential unmeasured or uncontrolled confounders, self-selection, reverse causality, and other biases involved in observational studies, the research team noted.

Further research is warranted to investigate the effects in patients with prior antidiabetic treatments, underlying mechanisms, potential variation in effects among different GLP-1 RAs, and the potential of GLP-1 RAs to reduce the risks for other obesity-associated cancers, the researchers wrote.

“To our knowledge, this is the first indication this popular weight loss and antidiabetic class of drugs reduces incidence of CRC, relative to other antidiabetic agents,” said Rong Xu, PhD, co-lead researcher, professor of medicine, and member of the Case Comprehensive Cancer Center.

The study was supported by the National Cancer Institute Case Comprehensive Cancer Center, American Cancer Society, Landon Foundation-American Association for Cancer Research, National Institutes of Health Director’s New Innovator Award Program, National Institute on Aging, and National Institute on Alcohol Abuse and Alcoholism. Several authors reported grants from the National Institutes of Health during the conduct of the study.
 

A version of this article appeared on Medscape.com.
