Earlier colorectal cancer screening appears cost-effective in overweight, obese patients
Starting colorectal cancer screening earlier than age 50 appears to be cost-effective for both men and women across all body mass index (BMI) measures, according to a study published in Clinical Gastroenterology and Hepatology.
In particular, colonoscopy is cost-effective at age 45 for all BMI strata and at age 40 in obese men. In addition, fecal immunochemical testing (FIT) is highly cost-effective at ages 40 or 45 for all BMI values, wrote Aaron Yeoh, MD, a gastroenterologist at Stanford (Calif.) University, and colleagues.
The prevalence of increased body fatness, defined as a high BMI, has risen sharply in recent decades, and high BMI has been associated with a higher risk of colorectal cancer (CRC). Given the rising incidence of CRC in younger people, the American Cancer Society and U.S. Preventive Services Task Force now endorse screening at age 45. In previous analyses, Dr. Yeoh and colleagues suggested that this policy is likely to be cost-effective, but they did not explore potential differences by BMI.
“Our results suggest that 45 years of age is a reasonable screening initiation age for women and men with BMI ranging from normal through all classes of obesity,” the authors wrote. “Before changing screening policy, supportive data from clinical studies would be needed. Our approach can be applied to future efforts aiming to risk-stratify CRC screening based on multiple clinical factors or biomarkers.”
The research team examined the potential effectiveness and cost-effectiveness of screening tailored to BMI, starting as early as age 40 and ending at age 75, in 10 separate cohorts of men and women of normal weight (18.5 to <25 kg/m2), overweight (25 to <30 kg/m2), and three strata of obesity – obese I (30 to <35 kg/m2), obese II (35 to <40 kg/m2), and obese III (≥40 kg/m2).
For each cohort, the researchers estimated incremental costs per quality-adjusted life year (QALY) gained by initiating screening at age 40 versus age 45 versus age 50, or by shortening colonoscopy intervals. They modeled screening colonoscopy every 10 years (Colo10) or every 5 years (Colo5), or annual FIT, offered from ages 40, 45, or 50 through age 75 with 100% adherence, with postpolypectomy surveillance through age 80.
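The central summary measure in such analyses is the incremental cost-effectiveness ratio (ICER): the difference in lifetime cost between two strategies divided by the difference in QALYs they yield. As a minimal illustration of that arithmetic, the Python sketch below computes an ICER; the function and the example figures are hypothetical and are not values reported in the study.

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost per QALY gained for strategy A versus strategy B."""
    delta_cost = cost_a - cost_b
    delta_qaly = qaly_a - qaly_b
    if delta_qaly <= 0:
        return None  # strategy A adds no QALYs, so an ICER is not meaningful
    return delta_cost / delta_qaly

# Illustrative (made-up) per-person values for starting screening at 45 vs. 50.
print(icer(cost_a=3450, qaly_a=19.021, cost_b=3100, qaly_b=19.016))  # 70000.0

Strategies with lower ICERs are more attractive; the article later cites $100,000 per QALY gained as a benchmark.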
For model inputs, the research team favored high-quality data from meta-analyses or large prospective trials. Screening, treatment, and complication costs were set at 2018 Centers for Medicare & Medicaid Services rates for ages 65 and older and modified to reflect commercial costs at ages younger than 65. The authors assumed use of moderate sedation, and sensitivity analyses addressed the possible increased costs and complications of colonoscopy under propofol.
Overall, without screening, sex-specific total CRC deaths were similar for people with overweight or obesity I-III and slightly higher than for people with normal BMI. For both men and women across all BMI strata, Colo10 or FIT starting at age 50 substantially decreased CRC incidence and mortality versus no screening, and the magnitude of the clinical impact was comparable across BMI.
For both sexes across BMI, Colo10 or FIT starting at age 50 was highly cost-effective. The cost per QALY gained for Colo10 compared with no screening became more favorable as BMI increased from normal to obesity III. FIT was cost-saving compared with no screening for all cohorts and was cost-saving or highly cost-effective compared with Colo10 within each cohort.
Initiating Colo10 at age 45 rather than 50 produced further decreases in CRC incidence and mortality, although these were modest compared with the gains of Colo10 at age 50 versus no screening. The incremental gains, however, were achieved at acceptable incremental costs, ranging from $64,500 to $85,900 per QALY gained in women and from $33,400 to $64,200 per QALY gained in men.
Initiating Colo10 at age 40 in women and men in the lowest three BMI strata was associated with high incremental costs per QALY gained. In contrast, Colo10 initiation at age 40 cost $80,400 per QALY gained in men with obesity III and $93,300 per QALY gained in men with obesity II.
FIT starting at ages 40 or 45 yielded progressively greater decreases in CRC incidence and mortality for both men and women across BMI strata, and it was highly cost-effective versus starting at later ages. At every screening initiation age, FIT was either cost-saving compared with Colo10 or preferred because the incremental cost per QALY gained for Colo10 over FIT was very high, and FIT required substantially fewer colonoscopies per person.
Intensifying screening by shortening the colonoscopy interval to Colo5 was never preferred over shifting Colo10 to an earlier screening initiation age. In all cohorts, Colo5 was either less effective and more costly than Colo10 started at a younger age or, when it was more effective, cost substantially more than $100,000 per QALY gained.
Additional studies are needed to understand obesity-specific colonoscopy risks and costs, the authors wrote. In addition, obesity is only one of several factors that should be considered when tailoring CRC screening to the level of CRC risk.
“As the search for a multifactor prediction tool that is ready for clinical application continues, we face the question of how to approach single CRC risk factors such as obesity,” they wrote. “While screening guidelines based on BMI can be envisioned if supportive clinical data accumulate, clinical implementation must overcome operational challenges.”
The study funding was not disclosed. One author reported advisory and consultant roles for several medical companies, and the remaining authors disclosed no conflicts.
Obesity is associated with an increased risk of colorectal cancer, along with cancers of the breast, endometrium, and esophagus. Even maternal obesity is associated with higher offspring colorectal cancer rates. Key mechanisms that underlie these associations include high insulin levels in obesity that propel tumor growth, adipose tissue that secretes inflammatory cytokines, and high glucose levels that act as fuel for cancer proliferation.
For men with a BMI over 35, moving colonoscopy screening initiation to age 40 was cost-effective. However, it’s not clear that in practice the juice is worth the squeeze. Changing screening initiation times further based on personalized factors such as BMI could make screening more confusing for patients and physicians and may hurt uptake, a critical factor for the success of any screening program.
The study supports the current paradigm that screening starting at age 45 is cost-effective among men and women at all BMI ranges, a reassuring conclusion. It also serves as a sobering reminder that promoting metabolic health in our patients, our schools, and our communities is a valuable endeavor.
Sarah McGill, MD, MSc, FACG, FASGE, is associate professor of medicine, gastroenterology, and hepatology at the University of North Carolina at Chapel Hill. She receives research funding from Olympus America, Finch Therapeutics, Genentech, Guardant Health, and Exact Sciences.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Proximal ADR could become important new quality metric
Measurement of the proximal adenoma detection rate may be an important new quality metric for screening colonoscopy, researchers propose in a study that found proportionately more adenomas detected in the right colon with increasing patient age.
As patients age, in fact, the rate of increase of proximal adenomas is far greater than for distal adenomas in both men and women and in all races, wrote Lawrence Kosinski, MD, founder and chief medical officer of Sonar MD in Chicago, and colleagues.
Adenoma detection rate (ADR), the proportion of screening colonoscopies performed by a physician that detect at least one histologically confirmed colorectal adenoma or adenocarcinoma, has become an accepted quality metric because of the association of high ADR with lower rates of postcolonoscopy colorectal cancer (CRC). ADR varies widely among endoscopists, however, which could be related to differences in adenoma detection in different parts of the colon.
“These differences could be clinically important if CRC occurs after colonoscopy,” the authors wrote. The study was published in Techniques and Innovations in Gastrointestinal Endoscopy.
Dr. Kosinski and colleagues analyzed retrospective claims data from all colonoscopies performed from 2016 to 2018 that were submitted to the Health Care Service Corporation, the exclusive Blue Cross Blue Shield licensee for Illinois, Texas, Oklahoma, New Mexico, and Montana. All 50 states were represented in the patient population, though Illinois and Texas accounted for 66% of the cases.
The research team limited the study group to patients who underwent a screening colonoscopy, representing 30.9% of the total population. They further refined the data to include only screening colonoscopies performed by the 710 endoscopists with at least 100 screenings during the study period, representing 34.5% of the total patients. They also excluded 10,685 cases with a family history of CRC because these high-risk patients could alter the results.
Using ICD-10 codes, the researchers identified the polyp detection locations and then calculated the ADR for the entire colon (T-ADR) and both the proximal (P-ADR) and distal (D-ADR) colon to determine differences in the ratio of P-ADR versus D-ADR by age, sex, and race. They were unable to determine whether the polyps were adenomas or sessile serrated lesions, so the ADR calculations include both.
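As a rough sketch of how these rates can be derived from procedure-level records, the Python example below computes T-ADR, P-ADR, D-ADR, and the P-ADR/D-ADR ratio for a small set of screening colonoscopies; the record structure and field names are invented for illustration and do not represent the claims codes the authors used.

# Each screening colonoscopy record flags whether at least one polyp was found
# in the proximal (right) colon and/or the distal (left) colon.
procedures = [
    {"proximal": True,  "distal": False},
    {"proximal": False, "distal": True},
    {"proximal": True,  "distal": True},
    {"proximal": False, "distal": False},
]

n = len(procedures)
p_adr = sum(p["proximal"] for p in procedures) / n                 # proximal detection rate
d_adr = sum(p["distal"] for p in procedures) / n                   # distal detection rate
t_adr = sum(p["proximal"] or p["distal"] for p in procedures) / n  # any-site detection rate
ratio = p_adr / d_adr if d_adr else float("inf")
print(t_adr, p_adr, d_adr, ratio)  # 0.75 0.5 0.5 1.0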
The 182,296 screening colonoscopies included 93,164 women (51%) and 89,132 men (49%). About 79% of patients were aged 50-64 years, and 5.8% were under age 50. The dataset preceded the U.S. Preventive Services Task Force recommendation to initiate screening at age 45.
Overall, T-ADR was consistent with accepted norms in both men (25.99%) and women (19.72%). Compared with women, men had a 4.5% higher prevalence of proximal adenomas and a 2.5% higher prevalence of distal adenomas at any age. The small cohort of Native Americans (296 patients) had a numerically higher T-ADR, P-ADR, and D-ADR than other groups.
T-ADR increased significantly with advancing age, from 0.13 in patients under age 40 to 0.39 in those aged 70 and older. The increase was driven by a sharp rise in P-ADR, particularly after age 60. There was a relatively small increase in D-ADR after ages 45-49.
Notably, the P-ADR/D-ADR ratio increased from 1.2 in patients under age 40 to 2.65 in ages 75 and older in both men and women.
Since the experience of the endoscopist affects ADR, the research team also calculated the ADR data by deciles of total colonoscopy volume per endoscopist. T-ADR, P-ADR, and D-ADR each had a direct, linear relationship with the number of total colonoscopies performed. The slope of the P-ADR trendline was 2.3 times higher than the slope of the D-ADR trendline, indicating that higher procedure volume was directly related to higher polyp detection, particularly in the proximal colon.
“Our data demonstrate that it is feasible to measure P-ADR in clinical practice,” the authors wrote. “We propose that P-ADR be considered a quality metric for colonoscopy.”
In addition, because of considerable variation in ADR based on age and sex, calculated ADR should be normalized by the age and sex of the specific patient profile of each endoscopist so relevant benchmarks can be established based on practice demographics, they wrote. For example, an endoscopist with a practice that includes predominantly younger women would have a different benchmark than a colleague with an older male population.
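One straightforward way to build such practice-specific benchmarks is direct standardization: weight stratum-specific reference detection rates by each endoscopist’s own age and sex case mix. The sketch below illustrates the idea; the strata and reference rates are invented for illustration and are not values reported in the study.

# Hypothetical reference ADRs by (sex, age band) and one endoscopist's case mix.
reference_adr = {
    ("F", "45-54"): 0.15, ("F", "55-64"): 0.21,
    ("M", "45-54"): 0.22, ("M", "55-64"): 0.29,
}
case_mix = {  # share of this endoscopist's screening colonoscopies in each stratum
    ("F", "45-54"): 0.40, ("F", "55-64"): 0.25,
    ("M", "45-54"): 0.20, ("M", "55-64"): 0.15,
}

# Expected (benchmark) ADR given this practice's demographics; the endoscopist's
# observed ADR would then be compared against this value rather than a global norm.
benchmark = sum(reference_adr[stratum] * share for stratum, share in case_mix.items())
print(round(benchmark, 3))  # 0.2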
“With appropriate use of gender and age adjustments to ADR, endoscopists in need of further education and mentoring can be identified,” they wrote.
The authors declared no funding for the study. One author reported advisory roles for several medical companies, and the remaining authors disclosed no conflicts.
“What gets measured gets managed” is a common mantra in quality improvement. Adenoma detection rate (ADR) is currently one measure of a “quality” colonoscopy and a metric that is studied to determine means for improvement. ADR is an imperfect measure because it does not necessarily reflect the true risk of a postcolonoscopy cancer in all parts of the colon, since many postcolonoscopy cancers are found in the proximal colon. To better characterize potential differences in polyps found in different segments of the colon, and to determine whether this could be measured as a metric, Kosinski and colleagues studied a large claims database to compare ADR in the proximal versus distal colon.
Sunanda Kane, MD, MSPH, is professor of medicine in the division of gastroenterology and hepatology at the Mayo Clinic, Rochester, Minn. Dr. Kane has no relevant conflicts of interest.
FROM TECHNIQUES AND INNOVATIONS IN GASTROINTESTINAL ENDOSCOPY
Genomic features may explain sex differences in HBV-associated HCC
In findings that point to a potential treatment strategy, researchers in China have discovered how two risk factors – male hormones and aflatoxin – may drive hepatocellular carcinoma (HCC). The genetics and biology of these liver cancers differ between men and women, which helps explain why aflatoxin exposure increases the risk of HCC in hepatitis B virus (HBV)–infected patients, particularly in men.
The researchers found evidence that androgen signaling increased aflatoxin metabolism and genotoxicity, reduced DNA repair capabilities, and quelled antitumor immunity, Chungui Xu, PhD, with the State Key Lab of Molecular Oncology at the National Cancer Center at Peking Union Medical College in Beijing, and colleagues wrote. The study was published in Cellular and Molecular Gastroenterology and Hepatology.
“Androgen signaling in the context of genotoxic stress repressed DNA damage repair,” the authors wrote. “The alteration caused more nuclear DNA leakage into cytosol to activate the cGAS-STING pathway, which increased T-cell infiltration into tumor mass and improved anti–programmed cell death protein 1 [PD-1] immunotherapy in HCCs.”
In the study, the researchers conducted genomic analyses of HCC tumor samples from people with HBV who were exposed to aflatoxin in Qidong, China, an area that until recently had some of the highest liver cancer rates in the world. In subsequent experiments in cell lines and mice, the team investigated how the genetic alterations and transcription dysfunctions reflected the combined carcinogenic effects of aflatoxin and HBV.
Dr. Xu and colleagues performed whole-genome, whole-exome, and RNA sequencing on tumor and matched nonneoplastic liver tissues from 101 HBV-related HCC patients (47 men and 54 women). The patients had received primary hepatectomy without systemic treatment or radiation therapy and were followed for 5 years. Aflatoxin exposure was confirmed by recording aflatoxin M1 in their urine 3-18 years before HCC diagnosis. For comparison, the research team analyzed 113 HBV-related HCC samples without aflatoxin exposure from the Cancer Genome Atlas database. They also looked at 181 Chinese HCC samples from the International Cancer Genome Consortium that had no record of aflatoxin exposure. They found no sex differences in mutation patterns for previously identified HCC driver genes, but the tumor mutation burden was higher in the Qidong set.
In the Qidong samples, the research team identified 71 genes with significantly different mutation frequencies by sex. Among those, 62 genes were associated more frequently with men, and 9 genes were associated with women. None of the genes have been reported previously as HCC drivers, although some have been found previously in other cancers, such as melanoma, lung cancer, and thyroid adenocarcinoma.
From whole-genome sequencing of 88 samples, the research team detected HBV integration in 37 samples and identified 110 breakpoints. No difference in HBV breakpoint numbers was detected between the sexes, though there were differences in somatic mutation profiles and in HBV integration, and only men had HBV breakpoints binding to androgen receptors.
From RNA sequencing of 87 samples, the research team identified 3,070 significantly differentially expressed genes between men and women. The transcription levels of estrogen receptor 1 and 2 were similar between the sexes, but men expressed higher androgen receptor levels.
The researchers then analyzed the variation in gene expression between the male and female gene sets to understand HCC transcriptional dysfunction. The samples from men showed different biological capabilities, with up-regulation of several signaling pathways related to HCC development and progression. The male samples also showed repression of specific antitumor immunity.
Men’s HCC tumor samples expressed higher levels of aflatoxin metabolism-related genes, such as AHR and CYP1A1, but lower levels of GSTM1 genes.
Turning to cell lines, the researchers used HBV-positive HepG2.2.15 cells and PLC/PRF/5 cells to test sex hormones in the regulation of AHR and CYP1A1 and how their interactions affected aflatoxin B1 cytotoxicity. After aflatoxin treatment, the addition of testosterone to the cultures significantly enhanced the transcription levels of AHR and CYP1A1. The aflatoxin dose needed to cause cell death was reduced by half in the presence of testosterone.
DNA damage from aflatoxin activates DNA repair mechanisms, so the research team analyzed different repair pathways. In the male tumor samples, the most down-regulated pathway was nonhomologous end joining (NHEJ). The male samples expressed significantly lower levels of NHEJ factors than did the female samples, including XRCC4, MRE11, ATM, XRCC5, and NBN.
In cell lines, the researchers tested the effects of androgen alone and with aflatoxin on the regulation of NHEJ factors. The transcriptional levels of XRCC4, LIG4, and MRE11 were reduced significantly in cells treated with both aflatoxin and testosterone, compared with those treated with aflatoxin alone. Notably, the addition of 17beta-estradiol estrogen partially reversed the reduction of XRCC4 and MRE11 expression.
The tumor samples from men also showed gene signatures of immune responses and inflammation that differed from those of the samples from women. Genes related to type I interferon signaling and response were up-regulated significantly in male samples but not in female samples. In addition, the samples from men showed repression of antigen-specific antitumor immunity. The research team detected significantly increased CD8+ T-cell infiltration in tumor tissues of men but not women, as well as higher transcriptional levels of PD-1 and CTLA-4, two immune checkpoint proteins on T cells that keep them from attacking the tumor. The data indicate that androgen signaling in established HBV-related HCCs contributes to the development of an immunosuppressive microenvironment, the authors wrote, which could render the tumor sensitive to anti–PD-1 immunotherapy.
In mice, the researchers examined the impact of a favorable androgen pathway on anti–PD-1 treatment effects against hepatoma. They administered tamoxifen to block ER signaling in syngeneic tumor-bearing mice. In both male and female mice, tamoxifen enhanced the anti–PD-1 effects to eradicate the tumor quickly. They also administered flutamide to tumor-bearing mice to block the androgen pathway and found no significant difference in tumor growth in female mice, but in male mice, tumors grew faster in the flutamide-treated mice.
“Therapeutics that favor androgen signaling and/or blocking estrogen signaling may provide a new strategy to improve the efficacy of immune checkpoint inhibitors against HCC in combination with radiotherapy or chemotherapy that induced DNA damage,” the authors wrote. “The adjuvant effects of tamoxifen for favorable androgen signaling to boost the anti–PD-1 effect in HCC patients needs future study in a prospective HCC cohort.”
The study was supported by the National Natural Science Foundation of China, the Innovation Fund for Medical Sciences of the Chinese Academy of Medical Sciences, the State Key Project for Infectious Diseases, and Peking Union Medical College. The authors disclosed no conflicts.
To read an editorial that accompanied this study in Cellular and Molecular Gastroenterology and Hepatology, go to https://www.cmghjournal.org/article/S2352-345X(22)00234-X/fulltext.
In findings that point to a potential treatment strategy, researchers in China have discovered how two risk factors – male hormones and aflatoxin – may drive hepatocellular carcinoma (HCC). The liver cancer genetics and biology differ between men and women and help explain why aflatoxin exposure increases the risk of HCC in hepatitis B virus (HBV)–infected patients, particularly in men.
The researchers found evidence that androgen signaling increased aflatoxin metabolism and genotoxicity, reduced DNA repair capabilities, and quelled antitumor immunity, Chungui Xu, PhD, with the State Key Lab of Molecular Oncology at the National Cancer Center at Peking Union Medical College in Beijing, and colleagues wrote. The study was published in Cellular and Molecular Gastroenterology and Hepatology.
“Androgen signaling in the context of genotoxic stress repressed DNA damage repair,” the authors wrote. “The alteration caused more nuclear DNA leakage into cytosol to activate the cGAS-STING pathway, which increased T-cell infiltration into tumor mass and improved anti–programmed cell death protein 1 [PD-1] immunotherapy in HCCs.”
In the study, the researchers conducted genomic analyses of HCC tumor samples from people with HBV who were exposed to aflatoxin in Qidong, China, an area that until recently had some of the highest liver cancer rates in the world. In subsequent experiments in cell lines and mice, the team investigated how the genetic alterations and transcription dysfunctions reflected the combined carcinogenic effects of aflatoxin and HPV.
Dr. Xu and colleagues performed whole-genome, whole-exome, and RNA sequencing on tumor and matched nonneoplastic liver tissues from 101 HBV-related HCC patients (47 men and 54 women). The patients had received primary hepatectomy without systemic treatment or radiation therapy and were followed for 5 years. Aflatoxin exposure was confirmed by recording aflatoxin M1 in their urine 3-18 years before HCC diagnosis. For comparison, the research team analyzed 113 HBV-related HCC samples without aflatoxin exposure from the Cancer Genome Atlas database. They also looked at 181 Chinese HCC samples from the International Cancer Genome Consortium that had no record of aflatoxin exposure. They found no sex differences in mutation patterns for previously identified HCC driver genes, but the tumor mutation burden was higher in the Qidong set.
In the Qidong samples, the research team identified 71 genes with significantly different mutation frequencies by sex. Among those, 62 genes were associated more frequently with men, and 9 genes were associated with women. None of the genes have been reported previously as HCC drivers, although some have been found previously in other cancers, such as melanoma, lung cancer, and thyroid adenocarcinoma.
From whole-genome sequencing of 88 samples, the research team detected HBV integration in 37 samples and identified 110 breakpoints. No difference in HBV breakpoint numbers was detected between the sexes, though there were differences in somatic mutation profiles and in HBV integration, and only men had HBV breakpoints binding to androgen receptors.
From RNA sequencing of 87 samples, the research team identified 3,070 significantly differentially expressed genes between men and women. The transcription levels of estrogen receptor 1 and 2 were similar between the sexes, but men expressed higher androgen receptor levels.
The researchers then analyzed the variation in gene expression between the male and female gene sets to understand HCC transcriptional dysfunction. The samples from men showed different biological capabilities, with several signaling pathways related to HCC development and progression that were up-regulated. The male samples also showed repression of specific antitumor immunity.
Men’s HCC tumor samples expressed higher levels of aflatoxin metabolism-related genes, such as AHR and CYP1A1, but lower levels of GSTM1 genes.
Turning to cell lines, the researchers used HBV-positive HepG2.2.15 cells and PLC/PRF/5 cells to test sex hormones in the regulation of AHR and CYP1A1 and how their interactions affected aflatoxin B1 cytotoxicity. After aflatoxin treatment, the addition of testosterone to the cultures significantly enhanced the transcription levels of AHR and CYP1A1. The aflatoxin dose needed to cause cell death was reduced by half in the presence of testosterone.
DNA damage from aflatoxin activates DNA repair mechanisms, so the research team analyzed different repair pathways. In the male tumor samples, the most down-regulated pathway was NHEJ. The male samples expressed significantly lower levels of NHEJ factors than did the female samples, including XRCC4, MRE11, ATM, HRCC5, and NBN.
In cell lines, the researchers tested the effects of androgen alone and with aflatoxin on the regulation of NHEJ factors. The transcriptional levels of XRCC4, LIG4, and MRE11 were reduced significantly in cells treated with both aflatoxin and testosterone, compared with those treated with aflatoxin alone. Notably, the addition of 17beta-estradiol estrogen partially reversed the reduction of XRCC4 and MRE11 expression.
The tumor samples from men also showed different gene signatures of immune responses and inflammation from the samples from women. The genes related to interferon I signaling and response were up-regulated significantly in male samples but not in female samples. In addition, the samples from men showed repression of antigen-specific antitumor immunity. The research team detected significantly increased CD8+T-cell infiltration in tumor tissues of men but not women, as well as higher transcriptional levels of PD-1 and CTLA-4, which are two immune checkpoint proteins on T cells that keep them from attacking the tumor. The data indicate that androgen signaling in established HBV-related HCCs contribute to the development of an immunosuppressive microenvironment, the authors wrote, which could render the tumor sensitive to anti–PD-1 immunotherapy.
In mice, the researchers examined the impact of a favorable androgen pathway on anti–PD-1 treatment effects against hepatoma. They administered tamoxifen to block ER signaling in syngeneic tumor-bearing mice. In both male and female mice, tamoxifen enhanced the anti–PD-1 effects to eradicate the tumor quickly. They also administered flutamide to tumor-bearing mice to block the androgen pathway and found no significant difference in tumor growth in female mice, but in male mice, tumors grew faster in the flutamide-treated mice.
“Therapeutics that favor androgen signaling and/or blocking estrogen signaling may provide a new strategy to improve the efficacy of immune checkpoint inhibitors against HCC in combination with radiotherapy or chemotherapy that induced DNA damage,” the authors wrote. “The adjuvant effects of tamoxifen for favorable androgen signaling to boost the anti–PD-1 effect in HCC patients needs future study in a prospective HCC cohort.”
The study was supported by the National Natural Science Foundation Fund of China, Innovation Fund for Medical Sciences of Chinese Academy of Medical Sciences, State Key Project for Infectious Diseases, and Peking Union Medical College. The authors disclosed no conflicts.
To read an editorial that accompanied this study in Cellular and Molecular Gastroenterology and Hepatology, go to https://www.cmghjournal.org/article/S2352-345X(22)00234-X/fulltext.
In findings that point to a potential treatment strategy, researchers in China have discovered how two risk factors – male hormones and aflatoxin – may drive hepatocellular carcinoma (HCC). The liver cancer genetics and biology differ between men and women and help explain why aflatoxin exposure increases the risk of HCC in hepatitis B virus (HBV)–infected patients, particularly in men.
The researchers found evidence that androgen signaling increased aflatoxin metabolism and genotoxicity, reduced DNA repair capabilities, and quelled antitumor immunity, Chungui Xu, PhD, with the State Key Lab of Molecular Oncology at the National Cancer Center at Peking Union Medical College in Beijing, and colleagues wrote. The study was published in Cellular and Molecular Gastroenterology and Hepatology.
“Androgen signaling in the context of genotoxic stress repressed DNA damage repair,” the authors wrote. “The alteration caused more nuclear DNA leakage into cytosol to activate the cGAS-STING pathway, which increased T-cell infiltration into tumor mass and improved anti–programmed cell death protein 1 [PD-1] immunotherapy in HCCs.”
In the study, the researchers conducted genomic analyses of HCC tumor samples from people with HBV who were exposed to aflatoxin in Qidong, China, an area that until recently had some of the highest liver cancer rates in the world. In subsequent experiments in cell lines and mice, the team investigated how the genetic alterations and transcription dysfunctions reflected the combined carcinogenic effects of aflatoxin and HPV.
Dr. Xu and colleagues performed whole-genome, whole-exome, and RNA sequencing on tumor and matched nonneoplastic liver tissues from 101 HBV-related HCC patients (47 men and 54 women). The patients had received primary hepatectomy without systemic treatment or radiation therapy and were followed for 5 years. Aflatoxin exposure was confirmed by recording aflatoxin M1 in their urine 3-18 years before HCC diagnosis. For comparison, the research team analyzed 113 HBV-related HCC samples without aflatoxin exposure from the Cancer Genome Atlas database. They also looked at 181 Chinese HCC samples from the International Cancer Genome Consortium that had no record of aflatoxin exposure. They found no sex differences in mutation patterns for previously identified HCC driver genes, but the tumor mutation burden was higher in the Qidong set.
In the Qidong samples, the research team identified 71 genes with significantly different mutation frequencies by sex. Among those, 62 genes were associated more frequently with men, and 9 genes were associated with women. None of the genes have been reported previously as HCC drivers, although some have been found previously in other cancers, such as melanoma, lung cancer, and thyroid adenocarcinoma.
From whole-genome sequencing of 88 samples, the research team detected HBV integration in 37 samples and identified 110 breakpoints. No difference in HBV breakpoint numbers was detected between the sexes, though there were differences in somatic mutation profiles and in HBV integration, and only men had HBV breakpoints binding to androgen receptors.
From RNA sequencing of 87 samples, the research team identified 3,070 significantly differentially expressed genes between men and women. The transcription levels of estrogen receptor 1 and 2 were similar between the sexes, but men expressed higher androgen receptor levels.
The researchers then analyzed the variation in gene expression between the male and female gene sets to understand HCC transcriptional dysfunction. The samples from men showed different biological capabilities, with up-regulation of several signaling pathways related to HCC development and progression. The male samples also showed repression of specific antitumor immunity.
Men’s HCC tumor samples expressed higher levels of aflatoxin metabolism–related genes, such as AHR and CYP1A1, but lower levels of GSTM1.
Turning to cell lines, the researchers used HBV-positive HepG2.2.15 cells and PLC/PRF/5 cells to test the role of sex hormones in regulating AHR and CYP1A1 and how those interactions affected aflatoxin B1 cytotoxicity. After aflatoxin treatment, the addition of testosterone to the cultures significantly enhanced the transcription levels of AHR and CYP1A1. The aflatoxin dose needed to cause cell death was reduced by half in the presence of testosterone.
DNA damage from aflatoxin activates DNA repair mechanisms, so the research team analyzed different repair pathways. In the male tumor samples, the most down-regulated pathway was nonhomologous end joining (NHEJ). The male samples expressed significantly lower levels of NHEJ factors than did the female samples, including XRCC4, MRE11, ATM, XRCC5, and NBN.
In cell lines, the researchers tested the effects of androgen alone and with aflatoxin on the regulation of NHEJ factors. The transcriptional levels of XRCC4, LIG4, and MRE11 were reduced significantly in cells treated with both aflatoxin and testosterone, compared with those treated with aflatoxin alone. Notably, the addition of 17beta-estradiol estrogen partially reversed the reduction of XRCC4 and MRE11 expression.
The tumor samples from men also showed gene signatures of immune responses and inflammation that differed from those in the samples from women. The genes related to type I interferon signaling and response were up-regulated significantly in male samples but not in female samples. In addition, the samples from men showed repression of antigen-specific antitumor immunity. The research team detected significantly increased CD8+ T-cell infiltration in tumor tissues of men but not women, as well as higher transcriptional levels of PD-1 and CTLA-4, two immune checkpoint proteins on T cells that keep them from attacking the tumor. The data indicate that androgen signaling in established HBV-related HCCs contributes to the development of an immunosuppressive microenvironment, the authors wrote, which could render the tumors sensitive to anti–PD-1 immunotherapy.
In mice, the researchers examined whether favoring the androgen pathway enhances the effects of anti–PD-1 treatment against hepatoma. They administered tamoxifen to block estrogen receptor signaling in syngeneic tumor-bearing mice. In both male and female mice, tamoxifen enhanced the anti–PD-1 effects, leading to rapid tumor eradication. They also administered flutamide to block the androgen pathway in tumor-bearing mice and found no significant difference in tumor growth in female mice; in male mice, however, flutamide-treated tumors grew faster.
“Therapeutics that favor androgen signaling and/or blocking estrogen signaling may provide a new strategy to improve the efficacy of immune checkpoint inhibitors against HCC in combination with radiotherapy or chemotherapy that induced DNA damage,” the authors wrote. “The adjuvant effects of tamoxifen for favorable androgen signaling to boost the anti–PD-1 effect in HCC patients needs future study in a prospective HCC cohort.”
The study was supported by the National Natural Science Foundation Fund of China, Innovation Fund for Medical Sciences of Chinese Academy of Medical Sciences, State Key Project for Infectious Diseases, and Peking Union Medical College. The authors disclosed no conflicts.
To read an editorial that accompanied this study in Cellular and Molecular Gastroenterology and Hepatology, go to https://www.cmghjournal.org/article/S2352-345X(22)00234-X/fulltext.
FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY
Nonheavy alcohol use associated with liver fibrosis, NASH
Nonheavy alcohol use is associated with liver fibrosis and nonalcoholic steatohepatitis (NASH), according to a new report.
An analysis of current drinkers in the Framingham Heart Study found that a higher number of drinks per week and higher frequency of drinking were associated with increased odds of fibrosis among patients whose consumption fell below the threshold for heavy alcohol use.
“Although the detrimental effects of heavy alcohol use are well accepted, there is no consensus guideline on how to counsel patients about how nonheavy alcohol use may affect liver health,” Brooke Rice, MD, an internal medicine resident at Boston University, said in an interview.
“Current terminology classifies fatty liver disease as either alcoholic or nonalcoholic,” she said. “Our results call this strict categorization into question, suggesting that even nonheavy alcohol use should be considered as a factor contributing to more advanced nonalcoholic fatty liver disease [NAFLD] phenotypes.”
The study was published online in Clinical Gastroenterology and Hepatology.
Analyzing associations
NAFLD and alcohol-related liver disease, which are the most common causes of chronic liver disease worldwide, are histologically identical but distinguished by the presence of significant alcohol use, the study authors wrote.
Heavy alcohol use, based on guidelines from the American Association for the Study of Liver Diseases, is defined as more than 14 drinks per week for women or more than 21 drinks per week for men.
Although heavy alcohol use is consistently associated with cirrhosis and steatohepatitis, studies of nonheavy alcohol use have shown conflicting results, the authors wrote. However, evidence suggests that the pattern of alcohol consumption – particularly increased weekly drinking and binge drinking – may be an important predictor.
Dr. Rice and colleagues conducted a cross-sectional study of 2,629 current drinkers in the Framingham Heart Study who completed alcohol-use questionnaires and vibration-controlled transient elastography between April 2016 and April 2019. They analyzed the association between fibrosis and several alcohol-use measures, including total consumption and drinking patterns, among nonheavy alcohol users whose liver disease would be classified as “nonalcoholic” by current nomenclature.
The research team defined clinically significant fibrosis as a liver stiffness measurement of 8.2 kPa or higher. For at-risk NASH, the researchers used two FibroScan-AST (FAST) score thresholds: greater than 0.35, and 0.67 or higher. They also considered additional metabolic factors such as physical activity, body mass index, blood pressure, glucose measures, and metabolic syndrome.
Participants were asked to estimate the frequency of alcohol use (average number of drinking days per week during the past year) and the usual quantity of alcohol consumed (average number of drinks on a typical drinking day during the past year). Researchers multiplied the figures to estimate the average total number of drinks per week.
Among the 2,629 current drinkers (53% women, 47% men), the average age was 54 years, 7.2% had diabetes, and 26.9% met the criteria for metabolic syndrome. Participants drank about 3 days per week on average with a usual consumption of two drinks per drinking day, averaging a total weekly alcohol consumption of six drinks.
The average liver stiffness measurement was 5.6 kPa, and 8.2% had significant fibrosis.
At the FAST score threshold of 0.67 or greater, 1.9% of participants were likely to have at-risk NASH, with a higher prevalence in those with obesity (4.5%) or diabetes (9.5%). At the FAST score threshold of greater than 0.35, the prevalence of at-risk NASH was 12.4%, which was higher in those with obesity (26.3%) or diabetes (34.4%).
Overall, an increased total number of drinks per week and higher frequency of drinking days were associated with increased odds of fibrosis.
Almost 17.5% of participants engaged in risky weekly drinking, which was defined as 8 or more drinks per week for women and 15 or more drinks per week for men. Risky weekly drinking was also associated with higher odds of fibrosis.
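To make the consumption measures and cutoffs described above concrete, here is a minimal sketch of how the weekly totals and drinking categories fit together. The function names are ours, and the thresholds are those stated in the text (AASLD heavy use: more than 14 drinks per week for women, more than 21 for men; risky weekly drinking: 8 or more for women, 15 or more for men); this is not the study's code.

```python
# Minimal sketch (not the study's code): derive drinks per week from the two
# questionnaire items and apply the thresholds described in the article.

def drinks_per_week(drinking_days_per_week: float, drinks_per_drinking_day: float) -> float:
    """Average weekly total = frequency x usual quantity."""
    return drinking_days_per_week * drinks_per_drinking_day

def is_heavy_use(weekly_drinks: float, female: bool) -> bool:
    """AASLD heavy alcohol use: >14 drinks/week (women) or >21 (men)."""
    return weekly_drinks > (14 if female else 21)

def is_risky_weekly_drinking(weekly_drinks: float, female: bool) -> bool:
    """Risky weekly drinking: >=8 drinks/week (women) or >=15 (men)."""
    return weekly_drinks >= (8 if female else 15)

# Example: the average participant (3 drinking days/week, 2 drinks per drinking day).
weekly = drinks_per_week(3, 2)  # 6 drinks/week
print(weekly, is_heavy_use(weekly, female=True), is_risky_weekly_drinking(weekly, female=True))
# -> 6 False False: a typical participant falls below both thresholds.
```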
After excluding 158 heavy drinkers, the prevalence of fibrosis was unchanged at 8%, and a higher total number of drinks per week remained significantly associated with fibrosis.
In addition, multiple alcohol-use measures were positively associated with a FAST score greater than 0.35 and were similar after excluding heavy alcohol users. These measures include the number of drinks per week, the frequency of drinking days, and binge drinking.
“We showed that nonheavy alcohol use is associated with fibrosis and at-risk NASH, which are both predictors of long-term liver-related morbidity and mortality,” Dr. Rice said.
Implications for patient care
The findings have important implications for both NAFLD clinical trials and patient care, the study authors wrote. For instance, the U.S. Dietary Guidelines for Americans recommend limiting alcohol use to one drink per day for women and two drinks per day for men.
“Our results reinforce the importance of encouraging all patients to reduce alcohol intake as much as possible and to at least adhere to current U.S. Dietary Guidelines recommended limits,” Dr. Rice said. “Almost half of participants in our study consumed in excess of these limits, which strongly associated with at-risk NASH.”
Additional long-term studies are needed to determine the benefits of limiting alcohol consumption to reduce liver-related morbidity and mortality, the authors wrote.
The effect of alcohol consumption on liver health “has been controversial, since some studies have suggested that nonheavy alcohol use can even have some beneficial metabolic effects and has been associated with reduced risk of fatty liver disease, while other studies have found that nonheavy alcohol use is associated with increased risk for liver-related clinical outcomes,” Fredrik Åberg, MD, PhD, a hepatologist and liver transplant specialist at Helsinki University Hospital, said in an interview.
Dr. Åberg wasn’t involved with this study but has researched alcohol consumption and liver disease. Among non–heavy alcohol users, drinking more alcohol per week is associated with increased hospitalization for liver disease, hepatocellular carcinoma, and liver-related death, he and his colleagues have found.
“We concluded that the net effect of non-heavy drinking on the liver is harm,” he said. “Overall, this study by Rice and colleagues supports the recommendation that persons with mild liver disease should reduce their drinking, and persons with severe liver disease (cirrhosis and advanced fibrosis) should abstain from alcohol use.”
The study authors are supported in part by the National Institute of Diabetes and Digestive and Kidney Diseases, a Doris Duke Charitable Foundation Grant, a Gilead Sciences Research Scholars Award, the Boston University Department of Medicine Career Investment Award, and the Boston University Clinical Translational Science Institute. The Framingham Heart Study is supported in part by the National Heart, Lung, and Blood Institute. The authors and Dr. Åberg reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Dietary interventions can support IBD treatment
Certain dietary interventions appear to support treatment of inflammatory bowel disease (IBD), according to a systematic review and meta-analysis. For Crohn’s disease, a diet low in refined carbohydrates and a symptoms-guided diet appeared to help induce remission, yet reducing refined carbohydrates or red meat didn’t lower the risk of relapse. For ulcerative colitis, solid food diets performed similarly to control diets.
“The Internet has a dizzying array of diet variants touted to benefit inflammation and IBD, which has led to much confusion among patients, and even clinicians, over what is truly effective or not,” Berkeley Limketkai, MD, PhD, director of clinical research at the Center for Inflammatory Bowel Disease at the University of California, Los Angeles, said in an interview.
“Even experiences shared by well-meaning individuals might not be generalizable to others,” he said. “The lack of clarity on what is or is not effective motivated us to perform this systematic review and meta-analysis.”
The study was published online in Clinical Gastroenterology and Hepatology.
Analyzing diets
Some nutritional therapies, such as exclusive enteral nutrition, have good evidence to support their use in the treatment of IBD, Dr. Limketkai said. However, patients often find maintaining a liquid diet difficult, particularly over a long period of time, so clinicians and patients have been interested in solid food diets as a treatment for IBD.
In 2019, Dr. Limketkai and colleagues conducted a systematic review and meta-analysis of randomized controlled trials focused on solid food diets for IBD that was published with the Cochrane Collaboration. At that time, the data were considered sparse, and the certainty of evidence was very low or low. Since then, several high-quality trials have been published.
For this study, Dr. Limketkai and colleagues conducted an updated review of 36 studies and a meta-analysis of 27 studies that compared a solid food diet with a control diet in patients with Crohn’s disease or ulcerative colitis. The intervention arm had to involve a well-defined diet, not merely a “usual” diet.
Among the studies, 12 trials involving 639 patients evaluated dietary interventions for inducing clinical remission in active Crohn’s disease. Overall, a low–refined carbohydrate diet was superior to a high-carbohydrate diet or a low-fiber diet. In addition, a symptoms-guided diet, which sequentially eliminated foods that aggravated a patient’s symptoms, was superior to conventional nutrition advice. However, the analyses were limited by serious imprecision and very low certainty of evidence.
Compared with respective controls, a highly restrictive organic diet, a low-microparticle diet, and a low-calcium diet were ineffective at inducing remission of Crohn’s disease. Studies focused on immunoglobulin G-based measures were also inconsistent.
When comparing diets touted to benefit patients with Crohn’s disease, the Specific Carbohydrate Diet was similar to the Mediterranean diet and the whole-food diet, though the certainty of evidence was low. Partial enteral nutrition was similar to exclusive enteral nutrition, though there was substantial statistical heterogeneity between studies and very low certainty of evidence.
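For readers unfamiliar with how pooled comparisons and heterogeneity statistics of this kind are produced, the sketch below shows a generic DerSimonian-Laird random-effects pooling of study-level log risk ratios with an I² heterogeneity estimate. It is purely illustrative: the numbers are made up, and this is not the authors' analysis code.

```python
# Illustrative only: generic DerSimonian-Laird random-effects pooling of
# study-level effects (log risk ratios) with an I^2 heterogeneity estimate.
# The inputs are hypothetical, not values from the review.
import numpy as np

log_rr = np.array([-0.35, -0.10, 0.05, -0.50])   # per-study log risk ratios
var = np.array([0.04, 0.09, 0.06, 0.12])         # per-study variances

w_fixed = 1.0 / var
pooled_fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)

# Cochran's Q and between-study variance (tau^2), DerSimonian-Laird estimator.
q = np.sum(w_fixed * (log_rr - pooled_fixed) ** 2)
dof = len(log_rr) - 1
tau2 = max(0.0, (q - dof) / (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))

# Random-effects weights, pooled estimate, and 95% confidence interval.
w_re = 1.0 / (var + tau2)
pooled_re = np.sum(w_re * log_rr) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re)

# I^2: percentage of variation attributable to between-study heterogeneity.
i2 = max(0.0, (q - dof) / q) * 100 if q > 0 else 0.0
print(f"Pooled RR = {np.exp(pooled_re):.2f} "
      f"(95% CI {np.exp(ci[0]):.2f}-{np.exp(ci[1]):.2f}), I^2 = {i2:.0f}%")
```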
For maintenance of Crohn’s disease remission, researchers evaluated 14 studies that included 1,211 patients with inactive disease. Partial enteral nutrition appeared to reduce the risk of relapse, although evidence certainty was very low. In contrast, reducing red meat or refined carbohydrates did not lower the risk of relapse.
“These findings seemingly contradict our belief that red meat and refined carbohydrates have proinflammatory effects, although there are other studies that appear to show inconsistent, weak, or no association between consumption of unprocessed red meat and disease,” Dr. Limketkai said. “The caveat is that our findings are based on weak evidence, which may change as more studies are performed over time.”
For induction of remission in ulcerative colitis, researchers evaluated three studies that included 124 participants with active disease. When compared with participants’ usual diet, there was no benefit from a diet that excluded symptom-provoking foods, fried foods, refined carbohydrates, additives, preservatives, most condiments, spices, and beverages other than boiled water. Other studies found no benefit from eliminating cow milk protein or gluten.
For maintenance of ulcerative colitis remission, they looked at four studies that included 101 patients with inactive disease. Overall, there was no benefit from a carrageenan-free diet, anti-inflammatory diet, or cow milk protein elimination diet.
Helping patients
Although the certainty of evidence remains very low or low for most dietary trials in IBD, the emerging data suggest that nutrition plays an important role in IBD management and should be considered in the overall treatment plan for patients, the study authors wrote.
“Patients continue to look for ways to control their IBD, particularly with diet. Providers continue to struggle with making evidence-based recommendations about dietary interventions for IBD. This systematic review is a useful tool for providers to advise their patients,” James D. Lewis, MD, associate director of the inflammatory bowel diseases program at the University of Pennsylvania, Philadelphia, said in an interview.
Dr. Lewis, who wasn’t involved with this study, has researched dietary interventions for IBD. He and his colleagues have found that reducing red meat does not lower the rate of Crohn’s disease flares and that the Mediterranean diet and Specific Carbohydrate Diet appear to be similar for inducing clinical remission.
Based on this review, partial enteral nutrition could be an option for patients with Crohn’s disease, Dr. Lewis said.
“Partial enteral nutrition is much easier than exclusive enteral nutrition for patients,” he said. “However, there remains uncertainty as to whether the solid food component of a partial enteral nutrition approach impacts outcomes.”
As more dietary studies become available, the certainty of evidence could improve and lead to better recommendations for patients, Dr. Limketkai and colleagues wrote. They are conducting several studies focused on the concept of precision nutrition.
“While certain diets may be helpful and effective for IBD, different diets work differently in different people. This concept is no different than the fact that different IBD medications work differently in different individuals,” Dr. Limketkai said. “However, given the current state of evidence for dietary interventions in IBD, we still have a long path of research ahead of us.”
The study received no funding. The study authors reported no conflicts of interest. Dr. Lewis reported no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Hospitals with more diverse and uninsured patients more likely to provide delayed fracture care
Patients treated at hospitals that serve more racially diverse and more uninsured populations were more likely to miss the recommended 24-hour benchmark for hip and femur fracture surgery, regardless of individual patient-level characteristics such as race, ethnicity, or insurance status, according to a new study.
“Institutions that treat a less diverse patient population appeared to be more resilient to the mix of insurance status in their patient population and were more likely to meet time-to-surgery benchmarks, regardless of patient insurance status or population-based insurance mix,” write study author Ida Leah Gitajn, MD, an orthopedic trauma surgeon at Dartmouth-Hitchcock Medical Center, Lebanon, N.H., and colleagues.
“While it is unsurprising that increased delays were associated with underfunded institutions, the association between institutional-level racial disparity and surgical delays implies structural health systems bias,” the authors wrote.
The study was published online in JAMA Network Open.
Site performance varied
Racial inequalities in health care utilization and outcomes have been documented in many medical specialties, including orthopedic trauma, the study authors write. However, previous studies evaluating racial disparities in fracture care have been limited to patient-level associations rather than hospital-level factors.
The investigators conducted a secondary analysis of prospectively collected multicenter data for 2,565 patients with hip and femur fractures enrolled in two randomized trials at 23 sites in the United States and Canada. The researchers assessed whether disparities in meeting 24-hour time-to-surgery benchmarks exist at the patient level or at the institutional level, evaluating the association of race, ethnicity, and insurance status.
The cohort study used data from the Program of Randomized Trials to Evaluate Preoperative Antiseptic Skin Solutions in Orthopaedic Trauma (PREP-IT), which enrolled patients from 2018-2021 and followed them for 1 year. All patients with hip and femur fractures enrolled in the PREP-IT program were included in the analysis, which was conducted from April to September of this year.
The cohort included 2,565 patients with an average age of about 65 years. About 82% of patients were White, 13.4% were Black, 3.2% were Asian, and 1.1% were classified as another race or ethnicity. Among the study population, 32.5% of participants were employed, and 92.2% had health insurance. Nearly 40% had a femur fracture with an average injury severity score of 10.4.
Overall, 596 patients (23.2%) didn’t meet the 24-hour time-to-operating-room benchmark. Patients who didn’t meet the 24-hour surgical window were more likely to be older, to be women, and to have a femur fracture. They were less likely to be employed.
The 23 sites had variability in meeting the 24-hour benchmark, race and ethnicity distribution, and population-based health insurance. Institutions met benchmarks at frequencies ranging from 45.2% (for 196 of 433 procedures) to 97.4% (37 of 38 procedures). Minority race and ethnicity distribution ranged from 0% (in 99 procedures) to 58.2% (in 53 of 91 procedures). The proportion of uninsured patients ranged from 0% (in 64 procedures) to 34.2% (in 13 of 38 procedures).
At the patient level, there was no association between missing the 24-hour benchmark and race or ethnicity, and there was no independent association between hospital population racial composition and surgical delay. In an analysis that controlled for patient-level characteristics, there was no association between missing the 24-hour benchmark and patient-level insurance status.
There was, however, an independent association with the interaction of hospital-level insurance coverage and hospital-level racial composition, suggesting a moderating effect (P = .03), the study authors write.
At low rates of uninsured patients, the probability of missing the 24-hour benchmark was 12.5%-14.6% when racial composition varied from 0%-50% minority patients. In contrast, at higher rates of uninsured patients, the risk of missing the 24-hour window was higher among more diverse populations. For instance, at 30% uninsured, the risk of missing the benchmark was 0.5% when the racial composition was low and 17.6% at 50% minority patients.
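The "interaction term" described here can be illustrated with a simple logistic regression in which the hospital-level uninsured rate and minority share enter both separately and as a product. The sketch below uses simulated data and a plain logit model purely to show the mechanics; it is not the authors' model, which also accounted for patient-level characteristics, and all variable names and coefficients are made up.

```python
# Illustrative only: how an insurance-by-racial-composition interaction term
# enters a logistic model for missing the 24-hour benchmark. Simulated data,
# simplified model -- not the study's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "pct_uninsured": rng.uniform(0, 0.35, n),  # hospital-level uninsured share
    "pct_minority": rng.uniform(0, 0.60, n),   # hospital-level minority share
})
# Simulate a moderating effect: delays rise mainly when both shares are high.
logit_p = (-2.0 + 0.5 * df.pct_uninsured + 0.5 * df.pct_minority
           + 6.0 * df.pct_uninsured * df.pct_minority)
df["missed_benchmark"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# The "*" in the formula adds both main effects and their interaction.
model = smf.logit("missed_benchmark ~ pct_uninsured * pct_minority", data=df).fit(disp=False)
print(model.summary())  # the pct_uninsured:pct_minority row is the interaction term
```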
Additional studies are needed to understand the findings and how health system programs or structures play a role, the authors write. For instance, well-funded health systems that care for a higher proportion of insured patients likely have quality improvement programs and other support structures, such as operating room access, that ensure appropriate time-to-surgery benchmarks for time-sensitive fractures, they say.
Addressing inequalities
Troy Amen, MD, MBA, an orthopedic surgery resident at the Hospital for Special Surgery, New York, said, “Despite these disparities being reported and well documented in recent years, unfortunately, not enough has been done to address them or understand their fundamental root causes.”
Dr. Amen, who wasn’t involved with this study, has researched racial and ethnic disparities in hip fracture surgery care across the United States. He and his colleagues found disparities in delayed time-to-surgery, particularly for Black patients.
“We live in a country and society where we want and strive for equality of care for patients regardless of race, ethnicity, gender, sexual orientation, or background,” he said. “We have a moral imperative to address these disparities as health care providers, not only among ourselves, but also in conjunction with lawmakers, hospital administrators, and health policy specialists.”
Uma Srikumaran, MD, an associate professor of orthopedic surgery at Johns Hopkins University, Baltimore, wasn’t involved with this study but has researched racial disparities in the timing of radiographic assessment and surgical treatment of hip fractures.
“Though we understand that racial disparities are pervasive in health care, we have a great deal left to understand about the extent of those disparities and all the various factors that contribute to them,” Dr. Srikumaran told this news organization.
Dr. Srikumaran and colleagues have found that Black patients had longer wait times for evaluation and surgery than White patients.
“We all want to get to the solutions, but those can be difficult to execute without an intricate understanding of the problem,” he said. “We should encourage this type of research all throughout health care in general but also very locally, as solutions are not likely to be one-size-fits-all.”
Dr. Srikumaran pointed to the need to measure the problem in specific pathologies, populations, geographies, hospital types, and other factors.
“Studying the trends of this issue will help us determine whether our national or local initiatives are making a difference and which interventions are most effective for a particular hospital, geographic location, or particular pathology,” he said. “Accordingly, if a particular hospital or health system isn’t looking at differences in the delivery of care by race, they are missing an opportunity to ensure equity and raise overall quality.”
The study was supported by funding from the Patient Centered Outcomes Research Institute. Dr. Gitajn reported receiving personal fees for consulting and teaching work from Stryker outside the submitted work. Dr. Amen and Dr. Srikumaran reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Regardless of individual patient-level characteristics such as race, ethnicity, or insurance status, these patients were more likely to miss the recommended 24-hour benchmark for surgery.
“Institutions that treat a less diverse patient population appeared to be more resilient to the mix of insurance status in their patient population and were more likely to meet time-to-surgery benchmarks, regardless of patient insurance status or population-based insurance mix,” write study author Ida Leah Gitajn, MD, an orthopedic trauma surgeon at Dartmouth-Hitchcock Medical Center, Lebanon, N.H., and colleagues.
“While it is unsurprising that increased delays were associated with underfunded institutions, the association between institutional-level racial disparity and surgical delays implies structural health systems bias,” the authors wrote.
The study was published online in JAMA Network Open.
Site performance varied
Racial inequalities in health care utilization and outcomes have been documented in many medical specialties, including orthopedic trauma, the study authors write. However, previous studies evaluating racial disparities in fracture care have been limited to patient-level associations rather than hospital-level factors.
The investigators conducted a secondary analysis of prospectively collected multicenter data for 2,565 patients with hip and femur fractures enrolled in two randomized trials at 23 sites in the United States and Canada. The researchers assessed whether disparities in meeting 24-hour time-to-surgery benchmarks exist at the patient level or at the institutional level, evaluating the association of race, ethnicity, and insurance status.
The cohort study used data from the Program of Randomized Trials to Evaluate Preoperative Antiseptic Skin Solutions in Orthopaedic Trauma (PREP-IT), which enrolled patients from 2018-2021 and followed them for 1 year. All patients with hip and femur fractures enrolled in the PREP-IT program were included in the analysis, which was conducted from April to September of this year.
The cohort included 2,565 patients with an average age of about 65 years. About 82% of patients were White, 13.4% were Black, 3.2% were Asian, and 1.1% were classified as another race or ethnicity. Among the study population, 32.5% of participants were employed, and 92.2% had health insurance. Nearly 40% had a femur fracture with an average injury severity score of 10.4.
Overall, 596 patients (23.2%) didn’t meet the 24-hour time-to-operating-room benchmark. Patients who didn’t meet the 24-hour surgical window were more likely to be older, women, and have a femur fracture. They were less likely to be employed.
The 23 sites had variability in meeting the 24-hour benchmark, race and ethnicity distribution, and population-based health insurance. Institutions met benchmarks at frequencies ranging from 45.2% (for 196 of 433 procedures) to 97.4% (37 of 38 procedures). Minority race and ethnicity distribution ranged from 0% (in 99 procedures) to 58.2% (in 53 of 91 procedures). The proportion of uninsured patients ranged from 0% (in 64 procedures) to 34.2% (in 13 of 38 procedures).
At the patient level, there was no association between missing the 24-hour benchmark and race or ethnicity, and there was no independent association between hospital population racial composition and surgical delay. In an analysis that controlled for patient-level characteristics, there was no association between missing the 24-hour benchmark and patient-level insurance status.
There was, however, an independent association between missing the benchmark and the interaction of hospital-level insurance coverage with hospital-level racial composition, suggesting a moderating effect (P = .03), the study authors write.
At low rates of uninsured patients, the probability of missing the 24-hour benchmark ranged from 12.5% to 14.6% as the racial composition varied from 0% to 50% minority patients. At higher rates of uninsured patients, by contrast, the risk of missing the 24-hour window rose with the diversity of the population: at a 30% uninsured rate, for example, the risk was 0.5% when the minority share was low and 17.6% when minority patients made up 50% of the population.
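A moderating effect of this kind is what a logistic regression with an interaction term between the two hospital-level variables would show. The sketch below is purely illustrative: the variable names, simulated data, and coefficients are assumptions for the example, not the authors' model or data.

```python
# Hypothetical sketch of a site-level interaction analysis like the one described above.
# Variable names and the simulated data are illustrative only, not the authors' code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2565
df = pd.DataFrame({
    "pct_uninsured": rng.uniform(0.0, 0.35, n),  # hospital-level share of uninsured patients
    "pct_minority": rng.uniform(0.0, 0.60, n),   # hospital-level share of minority patients
})
# Simulate an outcome so the example runs end to end (assumed coefficients, not real estimates).
linpred = -2.0 + 6.0 * df["pct_uninsured"] * df["pct_minority"]
df["missed_24h"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred.to_numpy())))

# Logistic regression with main effects plus the insurance-by-racial-composition interaction.
model = smf.logit("missed_24h ~ pct_uninsured * pct_minority", data=df).fit(disp=False)
print(model.summary())

# Predicted probability of missing the benchmark at 30% uninsured and 50% minority patients.
print(model.predict(pd.DataFrame({"pct_uninsured": [0.30], "pct_minority": [0.50]})))
```

In a model like this, it is a significant interaction coefficient, rather than the main effects alone, that supports the moderation interpretation reported in the study.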
Additional studies are needed to understand the findings and how health system programs or structures play a role, the authors write. For instance, well-funded health systems that care for a higher proportion of insured patients likely have quality improvement programs and other support structures, such as operating room access, that ensure appropriate time-to-surgery benchmarks for time-sensitive fractures, they say.
Addressing inequalities
Troy Amen, MD, MBA, an orthopedic surgery resident at the Hospital for Special Surgery, New York, said, “Despite these disparities being reported and well documented in recent years, unfortunately, not enough has been done to address them or understand their fundamental root causes.”
Dr. Amen, who wasn’t involved with this study, has researched racial and ethnic disparities in hip fracture surgery care across the United States. He and his colleagues found disparities in delayed time-to-surgery, particularly for Black patients.
“We live in a country and society where we want and strive for equality of care for patients regardless of race, ethnicity, gender, sexual orientation, or background,” he said. “We have a moral imperative to address these disparities as health care providers, not only among ourselves, but also in conjunction with lawmakers, hospital administrators, and health policy specialists.”
Uma Srikumaran, MD, an associate professor of orthopedic surgery at Johns Hopkins University, Baltimore, wasn’t involved with this study but has researched racial disparities in the timing of radiographic assessment and surgical treatment of hip fractures.
“Though we understand that racial disparities are pervasive in health care, we have a great deal left to understand about the extent of those disparities and all the various factors that contribute to them,” Dr. Srikumaran told this news organization.
Dr. Srikumaran and colleagues have found that Black patients had longer wait times for evaluation and surgery than White patients.
“We all want to get to the solutions, but those can be difficult to execute without an intricate understanding of the problem,” he said. “We should encourage this type of research all throughout health care in general but also very locally, as solutions are not likely to be one-size-fits-all.”
Dr. Srikumaran pointed to the need to measure the problem in specific pathologies, populations, geographies, hospital types, and other factors.
“Studying the trends of this issue will help us determine whether our national or local initiatives are making a difference and which interventions are most effective for a particular hospital, geographic location, or particular pathology,” he said. “Accordingly, if a particular hospital or health system isn’t looking at differences in the delivery of care by race, they are missing an opportunity to ensure equity and raise overall quality.”
The study was supported by funding from the Patient Centered Outcomes Research Institute. Dr. Gitajn reported receiving personal fees for consulting and teaching work from Stryker outside the submitted work. Dr. Amen and Dr. Srikumaran reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
AI versus other interventions for colonoscopy: How do they compare?
AI-based tools appear to outperform other methods intended to increase adenoma detection rates (ADRs), including distal attachment devices, dye-based/virtual chromoendoscopy, water-based techniques, and balloon-assisted devices, researchers found in a systematic review and meta-analysis.
“ADR is a very important quality metric. The higher the ADR, the less likely the chance of interval cancer,” first author Muhammad Aziz, MD, co-chief gastroenterology fellow at the University of Toledo (Ohio), told this news organization. Interval cancer refers to colorectal cancer that is diagnosed within 5 years of a patient’s undergoing a negative colonoscopy.
“Numerous interventions have been attempted and researched to see the impact on ADR,” he said. “The new kid on the block – AI-assisted colonoscopy – is a game-changer. I knew that AI was impactful in improving ADR, but I didn’t know it would be the best.”
The study was published online in the Journal of Clinical Gastroenterology.
Analyzing detection rates
Current guidelines set an ADR benchmark of 25% overall, with 30% for men and 20% for women undergoing screening colonoscopy. Every 1% increase in ADR is associated with a 3% reduction in the risk of colorectal cancer, Dr. Aziz and his co-authors write.
Several methods can improve ADR over standard colonoscopy. Computer-aided detection and AI methods, which have emerged in recent years, alert the endoscopist to potential lesions in real time with visual signals.
No direct comparative studies had been conducted, so to make an indirect comparison, Dr. Aziz and colleagues undertook a systematic review and network meta-analysis of 94 randomized controlled trials that included 61,172 patients and 20 different study interventions.
The research team assessed the impact of AI in comparison with other endoscopic methods, using relative risk for proportional outcomes and mean difference for continuous outcomes. About 63% of the colonoscopies were for screening and surveillance, and 37% were diagnostic. The effectiveness was ranked by P-score (the probability of being the best treatment).
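As a rough illustration of how a P-score can be derived, the sketch below computes one from hypothetical network estimates; the treatment effects and standard error are made up for the example and are not values from the study.

```python
# Minimal, illustrative P-score calculation: for each treatment, the mean one-sided
# probability that it outperforms each competitor, given assumed effect estimates.
import numpy as np
from scipy.stats import norm

treatments = ["AI", "Endocuff", "HD colonoscopy"]
effect = np.array([0.34, 0.17, 0.0])  # hypothetical log relative risks vs. HD colonoscopy (higher = better)
se_diff = 0.10                        # assumed common standard error for each pairwise difference

def p_score(idx: int) -> float:
    """Mean probability that treatment idx is better than each of the others."""
    others = [j for j in range(len(effect)) if j != idx]
    return float(np.mean([norm.cdf((effect[idx] - effect[j]) / se_diff) for j in others]))

for i, name in enumerate(treatments):
    print(f"{name}: P-score = {p_score(i):.2f}")
```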
Overall, AI had the highest P-score (0.96), signifying the best modality of all interventions for improving ADR, the study authors write. A sensitivity analysis using the fixed effects model did not significantly alter the effect measure.
The network meta-analysis showed significantly higher ADR for AI, compared with autofluorescence imaging (relative risk, 1.33), dye-based chromoendoscopy (RR, 1.22), Endocap (RR, 1.32), Endocuff (RR, 1.19), Endocuff Vision (RR, 1.26), EndoRings (RR, 1.30), flexible spectral imaging color enhancement (RR, 1.26), full-spectrum endoscopy (RR, 1.40), high-definition (HD) colonoscopy (RR, 1.41), linked color imaging (RR, 1.21), narrow-band imaging (RR, 1.33), water exchange (RR, 1.22), and water immersion (RR, 1.47).
Among 34 studies of colonoscopies for screening or surveillance only, the ADR was significantly improved for linked color imaging (RR, 1.18), I-Scan with contrast and surface enhancement (RR, 1.25), Endocuff (RR, 1.20), Endocuff Vision (RR, 1.13), and water exchange (RR, 1.24), compared with HD colonoscopy. Only one AI study was included in this analysis, because the others had significantly more patients who underwent colonoscopy for diagnostic indications. In that analysis, AI did not significantly improve ADR, compared with HD colonoscopy (RR, 1.44).
In addition, a significantly improved polyp detection rate (PDR) was noted for AI, compared with autofluorescence imaging (RR, 1.28), Endocap (RR, 1.18), Endocuff Vision (RR, 1.21), EndoRings (RR, 1.30), flexible spectral imaging color enhancement (RR, 1.21), full-spectrum endoscopy (RR, 1.39), HD colonoscopy (RR, 1.34), linked color imaging (RR, 1.19), and narrow-band imaging (RR, 1.21). Again, AI had the highest P-score (0.93).
Among 17 studies of colonoscopy for screening and surveillance, only one AI study was included for PDR. A significantly higher PDR was noted for AI, compared with HD colonoscopy (RR, 1.33). None of the other interventions improved PDR over HD colonoscopy.
No AI advantage for serrated polyps
Twenty-three studies evaluated detection of serrated polyps, including three AI studies. AI did not improve the serrated polyp detection rate (SPDR), compared with other interventions. Several modalities did improve SPDR, however: G-EYE versus full-spectrum endoscopy (RR, 3.93); linked color imaging versus full-spectrum endoscopy (RR, 1.88) and versus HD colonoscopy (RR, 1.71); and Endocuff Vision versus HD colonoscopy (RR, 1.36). G-EYE had the highest P-score (0.93).
AI significantly improved adenomas per colonoscopy, compared with full-spectrum endoscopy (mean difference, 0.38), HD colonoscopy (MD, 0.18), and narrow-band imaging (MD, 0.13), the authors note. However, the number of adenomas detected per colonoscopy was significantly lower for AI, compared with Endocap (MD, -0.13). Endocap had the highest P-score (0.92).
“The strengths of this study include the wide range of endoscopic add-ons included, the number of trials included, and the granularity of some of the reporting data,” Jeremy Glissen Brown, MD, a gastroenterologist and an assistant professor of medicine at Duke University, told this news organization.
Dr. Glissen Brown, who wasn’t involved with this study, researches AI tools for polyp detection. He and colleagues have found that AI decreases adenoma miss rates and increases the number of first-pass adenomas detected per colonoscopy.
“The limitations include significant heterogeneity among many of the comparisons, as well as a high risk of bias, as it is technically difficult to achieve blinding of provider participants in the device-based RCTs [randomized controlled trials] that this analysis was based on,” he said.
Additional considerations
Dr. Aziz and colleagues note the need for additional studies of AI-based detection, particularly for screening and surveillance. For widespread adoption into clinical practice, new systems must have higher specificity, sensitivity, accuracy, and efficiency, they write.
“AI technology needs further optimization, as there is still the aspect of having a lot of false positives – lesions detected but not necessarily adenomas that can turn into cancer,” Dr. Aziz said. “This decreases the efficiency of the colonoscopy and increases the anesthesia and sedation time. In addition, different AI systems have different diagnostic yield, as it all depends on the images that were fed to the system or algorithm.”
Dr. Glissen Brown also pointed to the low number of AI-based studies involving serrated polyp lesion detection. Future research could investigate whether computer-aided detection systems (CADe) decrease miss rates and increase detection rates for sessile serrated lesions, he said.
For practical clinical purposes, Dr. Glissen Brown highlighted the potential complementary nature of the various colonoscopy tools. When used together, for instance, AI and Endocuff may increase ADRs even further and decrease the number of missed polyps through different mechanisms, he said.
“It is also important in device research to interrogate the cost versus benefit of any intervention or combination of interventions,” he said. “I think with CADe this is still something that we are figuring out. We will need to find novel ways of making these technologies affordable, especially as the debate of which clinically meaningful outcomes we examine when it comes to AI continues to evolve.”
No funding source for the study was reported. Two authors have received grant support from or have consulted for several pharmaceutical and medical device companies. Dr. Glissen Brown has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF CLINICAL GASTROENTEROLOGY
Virtual yoga program appears to improve IBS symptoms, fatigue, stress
Participants in an 8-week virtual yoga program reported a decrease in irritable bowel syndrome (IBS)–related symptoms and improvements in quality of life, fatigue, and perceived stress, a randomized controlled trial found.
“IBS affects upwards of 15%-20% of the North American population, and despite our advances in the area, we have very limited options to offer our patients,” Maitreyi Raman, MD, an associate professor of medicine at the University of Calgary (Alta.), said in an interview.
“Often, we are focused on treating symptoms but not addressing the underlying cause,” said Dr. Raman, who is director of Alberta’s Collaboration of Excellence for Nutrition in Digestive Diseases. “With advances around the gut microbiome and the evolving science on the brain-gut axis, mind-body interventions could offer a therapeutic option that patients can use to improve the overall course of their disease.”
The study was published online in the American Journal of Gastroenterology.
Online yoga program vs. IBS advice only
IBS often involves alterations of the gut-brain axis and can be affected by psychological or physiological stress, the study authors write. Previous studies have found that in-person yoga programs can manage IBS symptoms and improve physiological, psychological, and emotional health.
During the COVID-19 pandemic, yoga programs had to switch to a virtual format – a delivery method that could remain relevant due to limited health care resources. However, the efficacy, feasibility, and safety of virtual yoga for people with IBS were unknown.
Dr. Raman and colleagues conducted a randomized, two-group, controlled clinical trial at the University of Calgary (Alta.) between March 2021 and December 2022. The 79 participants weren’t blinded to the trial arms – an online yoga program or an advice-only control group.
The eligible participants had a diagnosis of IBS, scored at least 75 of 500 points on the IBS Symptom Severity Scale (IBS-SSS), indicating at least mild IBS, and were on stable doses of medications for IBS. They were instructed to continue their current therapies during the study, and they didn't start new medications or make major changes to their diet or physical activity.
The yoga program was based on Upa Yoga, a subtype of Hatha Yoga developed by the Isha Foundation of Inner Sciences. The program was delivered by a certified yoga facilitator from the Isha Foundation and included directional movements, neck rotations, breathing practices, breath watching, and mantra meditation with aum/om chanting.
The online classes of three to seven participants were delivered in 60-minute sessions for 8 weeks. The participants were also asked to practice at home daily with the support of yoga videos.
The advice-only control group received a 10-minute video with general education on IBS, the mind-gut connection in IBS, and the role of mind-body therapies in managing IBS. These participants also received a list of IBS-related resources from the Canadian Digestive Health Foundation, a link to an IBS patient support group, and information about physical activity guidelines from the World Health Organization.
The research team looked for a primary endpoint of at least a 50-point reduction on the IBS-SSS, which is considered clinically meaningful.
They also measured for secondary outcomes, such as quality of life, anxiety, depression, perceived stress, COVID-19–related stress, fatigue, somatic symptoms, self-compassion, and intention to practice yoga.
Among the 79 participants, 38 were randomized to the yoga program and 41 were randomized to the advice-only control group. The average age was 45 years. Most (92%) were women, and 81% were White. The average IBS duration since diagnosis was 11.5 years.
The overall average IBS-SSS was moderate (245.3) at the beginning of the program and dropped to 207.9 at week 8. The score decreased from 255.2 to 200.5 in the yoga group and from 236.1 to 213.5 in the control group. The 32-point difference between the groups wasn't statistically significant, though symptom improvement began after 4 weeks in the yoga group.
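That 32-point figure follows directly from the group-level changes reported above; a trivial check, using only the numbers quoted in the text rather than trial data:

```python
# Check the between-group difference in IBS-SSS change from the reported means.
yoga_change = 255.2 - 200.5      # 54.7-point drop in the yoga group
control_change = 236.1 - 213.5   # 22.6-point drop in the control group
print(round(yoga_change - control_change, 1))  # ~32.1 points, consistent with the reported 32-point difference
```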
In the yoga group, 14 participants (37%) met the target decrease of 50 points or more, compared with eight participants (20%) in the control group. These 22 “responders” reported improvements in IBS symptoms, quality of life, perceived stress, and COVID-19–related stress.
Specifically, among the 14 responders in the yoga group, there were significant improvements in IBS symptoms, quality of life, fatigue, somatic symptoms, self-compassion, and COVID-19–related stress. In the control group, there were significant improvements in IBS symptoms and COVID-19–related stress.
Using an intent-to-treat analysis, the research team found that the yoga group had improved quality of life, fatigue, and perceived stress. In the control group, improvements were seen only in COVID-19–related stress.
No significant improvements were found in anxiety or depression between the groups, although the changes in depression scores were in favor of the yoga group. The intention to practice yoga dropped in both groups during the study period, but it wasn’t associated with the actual yoga practice minutes or change in IBS-SSS scores.
“We saw a surprising improvement in quality of life,” Dr. Raman said. “Although we talk about quality of life as an important endpoint, it can be hard to show in studies, so that was a nice finding to demonstrate in this study.”
The yoga intervention was feasible, with 79% adherence, a 20% attrition rate, and high program satisfaction, the researchers write. Safety was demonstrated by the absence of any adverse events.
Future program considerations
Dr. Raman and colleagues are interested in understanding the mechanisms that underlie the efficacy of mind-body interventions. They also plan to test the virtual yoga program in a mobile app, called LyfeMD, which is intended to support patients with digestive diseases through evidence-based dietary programs and mind-body interventions, such as guided meditation, breathing exercises, and cognitive behavioral therapy.
“We know that patients are looking for all possible resources,” Dr. Raman said. “Our next goal is to better understand how an app-based intervention can be effective, even without a live instructor.”
Future studies should also consider clinicians’ perspectives, she noted. In previous studies, Dr. Raman and colleagues have found that physicians are open to recommending yoga as a therapeutic option for patients, but some are unsure how to prescribe a recommended dose, frequency, or type of yoga.
“When treating patients with IBS, it is important to think broadly and creatively about all our treatment options,” said Elyse Thakur, PhD, a clinical health psychologist at Atrium Health Gastroenterology and Hepatology, Charlotte, N.C.
Dr. Thakur, who wasn’t involved with this study, specializes in gastrointestinal health psychology. She and colleagues use numerous complementary and alternative medicine options with patients.
“We have to remember that people may respond differently to available treatment options,” she said. “It is imperative to understand the evidence so we can have productive conversations with our patients about the pros and cons and the potential benefits and limitations.”
The study did not receive a specific grant from a funding agency. The authors and Dr. Thakur declared no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE AMERICAN JOURNAL OF GASTROENTEROLOGY
Celiac disease linked to higher risk for rheumatoid arthritis, juvenile idiopathic arthritis
Celiac disease is linked to juvenile idiopathic arthritis (JIA) in children and rheumatoid arthritis (RA) in adults, according to an analysis of nationwide data in Sweden.
“I hope that our study can ultimately change clinical practice by lowering the threshold to evaluate celiac disease patients for inflammatory joint diseases,” John B. Doyle, MD, a gastroenterology fellow at Columbia University Irving Medical Center in New York, told this news organization.
“Inflammatory joint diseases, such as JIA and RA, are notoriously difficult to diagnose given their variable presentations,” he said. “But if JIA or RA can be identified sooner by physicians, patients will ultimately benefit by starting disease-modifying therapy earlier in their disease course.”
The study was published online in The American Journal of Gastroenterology.
Analyzing associations
Celiac disease has been linked to numerous autoimmune diseases, including type 1 diabetes, autoimmune thyroid disease, lupus, and inflammatory bowel disease (IBD), Dr. Doyle noted. However, a definitive epidemiologic association between celiac disease and inflammatory joint diseases such as JIA or RA hasn’t been established.
Dr. Doyle and colleagues conducted a nationwide population-based, retrospective matched cohort study using data from the Epidemiology Strengthened by Histopathology Reports in Sweden. They identified 24,014 patients diagnosed with biopsy-proven celiac disease between 2004 and 2017.
With these data, each patient was matched to five reference individuals in the general population by age, sex, calendar year, and geographic region, for a total of 117,397 people without a previous diagnosis of celiac disease. The researchers calculated the incidence and estimated the relative risk for JIA in patients younger than 18 years and RA in patients aged 18 years or older.
For those younger than 18 years, the incidence rate of JIA was 5.9 per 10,000 person-years among the 9,415 patients with celiac disease versus 2.2 per 10,000 person-years in the general population, over a follow-up of 7 years. Those with celiac disease were 2.7 times as likely to develop JIA.
The association between celiac disease and JIA remained similar after adjustment for education, Nordic country of birth, type 1 diabetes, autoimmune thyroid disease, lupus, and IBD. The incidence rate of JIA among patients with celiac disease was higher in both females and males, and across all age groups studied.
When 6,703 children with celiac disease were compared with their 9,089 siblings without celiac disease, the higher risk for JIA in patients with celiac disease fell slightly short of statistical significance.
For those aged 18 years or older, the incidence rate of RA was 8.4 per 10,000 person-years among the 14,599 patients with celiac disease versus 5.1 per 10,000 person-years in the general population, over a follow-up of 8.8 years. Those with celiac disease were 1.7 times as likely to develop RA.
As with the younger cohort, the association between celiac disease and RA in the adult group remained similar after adjustment for education, Nordic country of birth, type 1 diabetes, autoimmune thyroid disease, lupus, and IBD. Although both men and women with celiac disease had higher rates of RA, the risk was higher among those in whom disease was diagnosed at age 18-59 years compared with those who received a diagnosis at age 60 years or older.
When 9,578 adults with celiac disease were compared with their 17,067 siblings without celiac disease, the risk for RA remained higher in patients with celiac disease.
This suggests “that the association between celiac disease and RA is unlikely to be explained by environmental factors alone,” Dr. Doyle said.
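The reported relative risks are close to what one gets by simply dividing the quoted incidence rates. The short Python sketch below recomputes the crude incidence rate ratios from the figures above; it is illustrative arithmetic only, not the authors’ adjusted model.

```python
# Crude incidence rate ratios from the rates quoted above (per 10,000 person-years).
# Illustrative only; the study's reported estimates come from adjusted models.
jia_rate_celiac, jia_rate_general = 5.9, 2.2   # children (<18 years)
ra_rate_celiac, ra_rate_general = 8.4, 5.1     # adults (>=18 years)

jia_ratio = jia_rate_celiac / jia_rate_general
ra_ratio = ra_rate_celiac / ra_rate_general

print(f"JIA crude incidence rate ratio: {jia_ratio:.2f}")  # ~2.68, in line with the reported 2.7
print(f"RA crude incidence rate ratio: {ra_ratio:.2f}")    # ~1.65, close to the reported 1.7
```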
Additional findings
Notably, the primary analysis excluded patients diagnosed with JIA or RA before their celiac disease diagnosis. In additional analyses, however, significant associations emerged.
Among children with celiac disease, 0.5% had a previous diagnosis of JIA, compared with 0.1% of matched comparators. Those with celiac disease were 3.5 times more likely to have a JIA diagnosis.
Among adults with celiac disease, 0.9% had a previous diagnosis of RA, compared with 0.6% of matched comparators. Those with celiac disease were 1.4 times more likely to have an RA diagnosis.
“We found that diagnoses of these types of arthritis were more common before a diagnosis of celiac disease compared to the general population,” Benjamin Lebwohl, MD, director of clinical research at the Celiac Disease Center at Columbia University, New York, told this news organization.
“This suggests that undiagnosed and untreated celiac disease might be contributing to these other autoimmune conditions,” he said.
Dr. Doyle and Dr. Lebwohl emphasized the practical implications for clinicians caring for patients with celiac disease. Among patients with celiac disease and inflammatory joint symptoms, clinicians should have a low threshold to evaluate for JIA or RA, they said.
“Particularly in pediatrics, we are trained to screen patients with JIA for celiac disease, but this study points to the possible bidirectional association and the importance of maintaining a clinical suspicion for JIA and RA among established celiac disease patients,” Marisa Stahl, MD, assistant professor of pediatrics and associate program director of the pediatric gastroenterology, hepatology, and nutrition fellowship training program at the University of Colorado at Denver, Aurora, said in an interview.
Dr. Stahl, who wasn’t involved with this study, conducts research at the Colorado Center for Celiac Disease. She and colleagues are focused on understanding the genetic and environmental factors that lead to the development of celiac disease and other autoimmune diseases.
Given the clear association between celiac disease and other autoimmune diseases, Dr. Stahl agreed that clinicians should have a low threshold for screening, with “additional workup for other autoimmune diseases once an autoimmune diagnosis is established.”
The study was supported by Karolinska Institutet and the Swedish Research Council. Dr. Lebwohl coordinates a study on behalf of the Swedish IBD quality register, which has received funding from Janssen. The other authors declared no conflicts of interest. Dr. Stahl reported no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM THE AMERICAN JOURNAL OF GASTROENTEROLOGY
Flu vaccination associated with reduced stroke risk
The risk of stroke was about 23% lower in the 6 months following a flu shot, regardless of the patient’s age, sex, or underlying health conditions.
“There is an established link between upper respiratory infection and both heart attack and stroke. This has been very salient in the past few years throughout the COVID-19 pandemic,” study author Jessalyn Holodinsky, PhD, a stroke epidemiologist and postdoctoral fellow in clinical neurosciences at the University of Calgary (Alta.), told this news organization.
“It is also known that the flu shot can reduce risk of heart attack and hospitalization for those with heart disease,” she said. “Given both of these [observations], we thought it prudent to study whether there is a link between vaccination for influenza and stroke.”
The study was published in the Lancet Public Health.
Large effect size
The investigators analyzed administrative data from 2009 through 2018 from the Alberta Health Care Insurance Plan, which covers all residents of Alberta. The province provides free seasonal influenza vaccines to residents under the insurance plan.
The research team looked for stroke events such as acute ischemic stroke, intracerebral hemorrhage, subarachnoid hemorrhage, and transient ischemic attack. They then analyzed the risk of stroke events among those with or without a flu shot in the previous 6 months. They accounted for multiple factors, including age, sex, income, location, and factors related to stroke risk, such as anticoagulant use, atrial fibrillation, chronic obstructive pulmonary disease, diabetes, and hypertension.
Among the 4.1 million adults included in the researchers’ analysis, about 1.8 million (43%) received at least one vaccination during the study period. Nearly 97,000 people received a flu vaccine in each year they were in the study, including 29,288 who received a shot in all 10 flu seasons included in the study.
About 38,000 stroke events were recorded, including about 34,000 (90%) first stroke events. Among the 10% of strokes that were recurrent events, the maximum number of stroke events in one person was nine.
Overall, patients who received at least one influenza vaccine were more likely to be older, be women, and have higher rates of comorbidities. The vaccinated group had a slightly higher proportion of people who lived in urban areas, but the income levels were similar between the vaccinated and unvaccinated groups.
The crude incidence of stroke was higher among people who had ever received an influenza vaccination, at 1.25%, compared with 0.52% among those who hadn’t been vaccinated. However, after adjusting for age, sex, underlying conditions, and socioeconomic status, recent flu vaccination (that is, in the previous 6 months) was associated with a 23% reduced risk of stroke.
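A simple calculation from the crude percentages above shows why the adjustment matters. The Python sketch below is illustrative only: the 23% figure comes from the authors’ adjusted model, not from this arithmetic, and the confounding explanation simply reflects the baseline differences described above.

```python
# Crude comparison versus the adjusted result reported in the study.
crude_vaccinated = 0.0125    # 1.25% stroke incidence among ever-vaccinated
crude_unvaccinated = 0.0052  # 0.52% among never-vaccinated

crude_ratio = crude_vaccinated / crude_unvaccinated
# ~2.4: higher crude risk in the vaccinated group, likely because vaccinated
# people were older and had more comorbidities at baseline.
print(f"Crude risk ratio: {crude_ratio:.1f}")

# After adjustment for age, sex, underlying conditions, and socioeconomic status,
# the reported association points the other way: about 23% lower risk with recent vaccination.
adjusted_relative_risk = 1 - 0.23
print(f"Reported adjusted relative risk: {adjusted_relative_risk:.2f}")  # 0.77
```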
The significant reduction in risk applied to all stroke types, particularly acute ischemic stroke and intracerebral hemorrhage. In addition, influenza vaccination was associated with a reduced risk across all ages and risk profiles, except among patients without hypertension.
“What we were most surprised by was the sheer magnitude of the effect and that it existed across different adult age groups, for both sexes, and for those with and without risk factors for stroke,” said Dr. Holodinsky.
Vaccination was associated with a larger reduction in stroke risk in men than in women, perhaps because unvaccinated men had a significantly higher baseline risk for stroke than unvaccinated women, the study authors write.
Promoting cardiovascular health
In addition, vaccination was associated with a greater relative reduction in stroke risk in younger age groups, lower income groups, and those with diabetes, chronic obstructive pulmonary disease, and anticoagulant use.
Among the 2.4 million people observed for the entire study period, the protective association increased with the number of vaccines received. People who were vaccinated serially each year had a significantly lower risk of stroke than those who received only one shot.
Dr. Holodinsky and colleagues are conducting additional research into influenza vaccination, including stroke risk in children. They’re also investigating whether the reduced risk applies to other vaccinations for respiratory illnesses, such as COVID-19 and pneumonia.
“We hope that this added effect of vaccination encourages more adults to receive the flu shot,” she said. “One day, vaccinations might be considered a key pillar of cardiovascular health, along with diet, exercise, control of hypertension and high cholesterol, and smoking cessation.”
Future research should also investigate the reasons why adults – particularly people at high risk with underlying conditions – don’t receive recommended influenza vaccines, the study authors wrote.
‘Call to action’
Bahar Behrouzi, an MD-PhD candidate focused on clinical epidemiology at the Institute of Health Policy, Management, and Evaluation, University of Toronto, said: “There are a variety of observational studies around the world that show that flu vaccine uptake is low among the general population and high-risk persons. In studying these questions, our hope is that we can continue to build confidence in viral respiratory vaccines like the influenza vaccine by continuing to generate rigorous evidence with the latest data.”
Ms. Behrouzi, who wasn’t involved with this study, has researched influenza vaccination and cardiovascular risk. She and her colleagues have found that flu vaccines were associated with a 34% lower risk of major adverse cardiovascular events, including a 45% reduced risk among patients with recent acute coronary syndrome.
“The broader public health message is for people to advocate for themselves and get the seasonal flu vaccine, especially if they are part of an at-risk group,” she said. “In our studies, we have positioned this message as a call to action not only for the public, but also for health care professionals – particularly specialists such as cardiologists or neurologists – to encourage or remind them to engage in conversation about the broad benefits of vaccination beyond just preventing or reducing the severity of flu infection.”
The study was conducted without outside funding. Dr. Holodinsky and Ms. Behrouzi have reported no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM LANCET PUBLIC HEALTH