Nonheavy alcohol use associated with liver fibrosis, NASH

Nonheavy alcohol use – fewer than 14 drinks per week for women and fewer than 21 drinks per week for men – is associated with liver fibrosis and nonalcoholic steatohepatitis (NASH), according to a new report.

An analysis of current drinkers in the Framingham Heart Study found that a higher number of drinks per week and higher frequency of drinking were associated with increased odds of fibrosis among patients whose consumption fell below the threshold for heavy alcohol use.

“Although the detrimental effects of heavy alcohol use are well accepted, there is no consensus guideline on how to counsel patients about how nonheavy alcohol use may affect liver health,” Brooke Rice, MD, an internal medicine resident at Boston University, said in an interview.

“Current terminology classifies fatty liver disease as either alcoholic or nonalcoholic,” she said. “Our results call this strict categorization into question, suggesting that even nonheavy alcohol use should be considered as a factor contributing to more advanced nonalcoholic fatty liver disease [NAFLD] phenotypes.”

The study was published online in Clinical Gastroenterology and Hepatology.
 

Analyzing associations

NAFLD and alcohol-related liver disease, which are the most common causes of chronic liver disease worldwide, are histologically identical but distinguished by the presence of significant alcohol use, the study authors wrote.

Heavy alcohol use, based on guidelines from the American Association for the Study of Liver Diseases, is defined as more than 14 drinks per week for women or more than 21 drinks per week for men.

Although heavy alcohol use is consistently associated with cirrhosis and steatohepatitis, studies of nonheavy alcohol use have shown conflicting results, the authors wrote. However, evidence suggests that the pattern of alcohol consumption – particularly increased weekly drinking and binge drinking – may be an important predictor.

Dr. Rice and colleagues conducted a cross-sectional study of 2,629 current drinkers in the Framingham Heart Study who completed alcohol-use questionnaires and vibration-controlled transient elastography between April 2016 and April 2019. They analyzed the association between fibrosis and several alcohol-use measures, including total consumption and drinking patterns, among nonheavy alcohol users whose liver disease would be classified as “nonalcoholic” by current nomenclature.

The research team defined clinically significant fibrosis as a liver stiffness measurement of 8.2 kPa or higher. For at-risk NASH, the researchers used two FibroScan-AST (FAST) score thresholds – greater than 0.35 and 0.67 or higher. They also considered additional metabolic factors such as physical activity, body mass index, blood pressure, glucose measures, and metabolic syndrome.

Participants were asked to estimate the frequency of alcohol use (average number of drinking days per week during the past year) and the usual quantity of alcohol consumed (average number of drinks on a typical drinking day during the past year). Researchers multiplied the figures to estimate the average total number of drinks per week.
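
To make the study’s definitions concrete, here is a minimal sketch of the weekly-consumption arithmetic and the drinking thresholds stated in this article. It is purely illustrative – the function names and structure are assumptions, not the study’s actual methods or code.

```python
# Illustrative only: thresholds are those stated in this article
# (AASLD heavy use: >14 drinks/week for women, >21 for men;
#  risky weekly drinking: >=8 drinks/week for women, >=15 for men).

def weekly_drinks(drinking_days_per_week: float, drinks_per_drinking_day: float) -> float:
    """Average drinks per week = drinking days per week x drinks per typical drinking day."""
    return drinking_days_per_week * drinks_per_drinking_day

def is_heavy_use(drinks_per_week: float, female: bool) -> bool:
    """Heavy alcohol use per the AASLD definition cited in the article."""
    return drinks_per_week > (14 if female else 21)

def is_risky_weekly_drinking(drinks_per_week: float, female: bool) -> bool:
    """Risky weekly drinking as defined later in the article."""
    return drinks_per_week >= (8 if female else 15)

# The average participant: 3 drinking days per week, 2 drinks per drinking day.
total = weekly_drinks(3, 2)  # 6 drinks per week
print(total, is_heavy_use(total, female=True), is_risky_weekly_drinking(total, female=True))
# 6 False False
```

By these definitions, the average participant described below (six drinks per week) falls below both the heavy-use and risky-weekly-drinking cutoffs.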

Among the 2,629 current drinkers (53% women, 47% men), the average age was 54 years, 7.2% had diabetes, and 26.9% met the criteria for metabolic syndrome. Participants drank about 3 days per week on average with a usual consumption of two drinks per drinking day, averaging a total weekly alcohol consumption of six drinks.

The average liver stiffness measurement was 5.6 kPa, and 8.2% had significant fibrosis.

At the FAST score threshold of 0.67 or greater, 1.9% of participants were likely to have at-risk NASH, with a higher prevalence in those with obesity (4.5%) or diabetes (9.5%). At the FAST score threshold of greater than 0.35, the prevalence of at-risk NASH was 12.4%, which was higher in those with obesity (26.3%) or diabetes (34.4%).

Overall, an increased total number of drinks per week and higher frequency of drinking days were associated with increased odds of fibrosis.

Almost 17.5% of participants engaged in risky weekly drinking, which was defined as 8 or more drinks per week for women and 15 or more drinks per week for men. Risky weekly drinking was also associated with higher odds of fibrosis.

After excluding 158 heavy drinkers, the prevalence of fibrosis was unchanged at 8%, and an increased total number of drinks per week remained significantly associated with fibrosis.

In addition, multiple alcohol-use measures – including the number of drinks per week, the frequency of drinking days, and binge drinking – were positively associated with a FAST score greater than 0.35, and these associations were similar after excluding heavy alcohol users.

“We showed that nonheavy alcohol use is associated with fibrosis and at-risk NASH, which are both predictors of long-term liver-related morbidity and mortality,” Dr. Rice said.

Implications for patient care

The findings have important implications for both NAFLD clinical trials and patient care, the study authors wrote. For instance, the U.S. Dietary Guidelines for Americans recommend limiting alcohol use to one drink per day for women and two drinks per day for men.

“Our results reinforce the importance of encouraging all patients to reduce alcohol intake as much as possible and to at least adhere to current U.S. Dietary Guidelines recommended limits,” Dr. Rice said. “Almost half of participants in our study consumed in excess of these limits, which strongly associated with at-risk NASH.”

Additional long-term studies are needed to determine the benefits of limiting alcohol consumption to reduce liver-related morbidity and mortality, the authors wrote.

The effect of alcohol consumption on liver health “has been controversial, since some studies have suggested that nonheavy alcohol use can even have some beneficial metabolic effects and has been associated with reduced risk of fatty liver disease, while other studies have found that nonheavy alcohol use is associated with increased risk for liver-related clinical outcomes,” Fredrik Åberg, MD, PhD, a hepatologist and liver transplant specialist at Helsinki University Hospital, said in an interview.

Dr. Åberg wasn’t involved with this study but has researched alcohol consumption and liver disease. Among nonheavy alcohol users, drinking more alcohol per week is associated with increased hospitalization for liver disease, hepatocellular carcinoma, and liver-related death, he and his colleagues have found.

“We concluded that the net effect of non-heavy drinking on the liver is harm,” he said. “Overall, this study by Rice and colleagues supports the recommendation that persons with mild liver disease should reduce their drinking, and persons with severe liver disease (cirrhosis and advanced fibrosis) should abstain from alcohol use.”

The study authors are supported in part by the National Institute of Diabetes and Digestive and Kidney Diseases, a Doris Duke Charitable Foundation Grant, a Gilead Sciences Research Scholars Award, the Boston University Department of Medicine Career Investment Award, and the Boston University Clinical Translational Science Institute. The Framingham Heart Study is supported in part by the National Heart, Lung, and Blood Institute. The authors and Dr. Åberg reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Dietary interventions can support IBD treatment

Some solid food diets may aid in the treatment of inflammatory bowel disease (IBD), though the overall quality of evidence remains low and additional data are needed, according to a new report.

For Crohn’s disease, a diet low in refined carbohydrates and a symptoms-guided diet appeared to help with remission, yet reduction of refined carbohydrates or red meat didn’t reduce the risk of relapse. For ulcerative colitis, solid food diets were similar to control measures.

“The Internet has a dizzying array of diet variants touted to benefit inflammation and IBD, which has led to much confusion among patients, and even clinicians, over what is truly effective or not,” Berkeley Limketkai, MD, PhD, director of clinical research at the Center for Inflammatory Bowel Disease at the University of California, Los Angeles, said in an interview.

“Even experiences shared by well-meaning individuals might not be generalizable to others,” he said. “The lack of clarity on what is or is not effective motivated us to perform this systematic review and meta-analysis.”

The study was published online in Clinical Gastroenterology and Hepatology.
 

Analyzing diets

Some nutritional therapies, such as exclusive enteral nutrition, have good evidence to support their use in the treatment of IBD, Dr. Limketkai said. However, patients often find maintaining a liquid diet difficult, particularly over a long period of time, so clinicians and patients have been interested in solid food diets as a treatment for IBD.

In 2019, Dr. Limketkai and colleagues conducted a systematic review and meta-analysis of randomized controlled trials focused on solid food diets for IBD that was published with the Cochrane Collaboration. At that time, the data were considered sparse, and the certainty of evidence was very low or low. Since then, several high-quality trials have been published.

For this study, Dr. Limketkai and colleagues conducted an updated review of 36 studies and a meta-analysis of 27 studies that compared a solid food diet with a control diet in patients with Crohn’s disease or ulcerative colitis. The intervention arm had to involve a well-defined diet, not merely a “usual” diet.

Twelve of the studies, involving 639 patients, evaluated dietary interventions for inducing clinical remission in patients with active Crohn’s disease. Overall, a low–refined carbohydrate diet was superior to a high-carbohydrate diet or a low-fiber diet. In addition, a symptoms-guided diet, which sequentially eliminated foods that aggravated a patient’s symptoms, was superior to conventional nutrition advice. However, the studies had serious imprecision and very low certainty of evidence.

Compared with respective controls, a highly restrictive organic diet, a low-microparticle diet, and a low-calcium diet were ineffective at inducing remission of Crohn’s disease. Studies focused on immunoglobulin G-based measures were also inconsistent.

When comparing diets touted to benefit patients with Crohn’s disease, the Specific Carbohydrate Diet was similar to the Mediterranean diet and the whole-food diet, though the certainty of evidence was low. Partial enteral nutrition was similar to exclusive enteral nutrition, though there was substantial statistical heterogeneity between studies and very low certainty of evidence.

For maintenance of Crohn’s disease remission, researchers evaluated 14 studies that included 1,211 patients with inactive disease. Partial enteral nutrition appeared to reduce the risk of relapse, although evidence certainty was very low. In contrast, reducing red meat or refined carbohydrates did not lower the risk of relapse.

“These findings seemingly contradict our belief that red meat and refined carbohydrates have proinflammatory effects, although there are other studies that appear to show inconsistent, weak, or no association between consumption of unprocessed red meat and disease,” Dr. Limketkai said. “The caveat is that our findings are based on weak evidence, which may change as more studies are performed over time.”

For induction of remission in ulcerative colitis, researchers evaluated three studies that included 124 participants with active disease. When compared with participants’ usual diet, there was no benefit from a diet that excluded symptom-provoking foods, fried foods, refined carbohydrates, additives, preservatives, most condiments, spices, and beverages other than boiled water. Other studies found no benefit from eliminating cow milk protein or gluten.

For maintenance of ulcerative colitis remission, they looked at four studies that included 101 patients with inactive disease. Overall, there was no benefit from a carrageenan-free diet, anti-inflammatory diet, or cow milk protein elimination diet.

Helping patients

Although the certainty of evidence remains very low or low for most dietary trials in IBD, the emerging data suggest that nutrition plays an important role in IBD management and should be considered in the overall treatment plan for patients, the study authors wrote.

“Patients continue to look for ways to control their IBD, particularly with diet. Providers continue to struggle with making evidence-based recommendations about dietary interventions for IBD. This systematic review is a useful tool for providers to advise their patients,” James D. Lewis, MD, associate director of the inflammatory bowel diseases program at the University of Pennsylvania, Philadelphia, said in an interview.

Dr. Lewis, who wasn’t involved with this study, has researched dietary interventions for IBD. He and his colleagues have found that reducing red meat does not lower the rate of Crohn’s disease flares and that the Mediterranean diet and Specific Carbohydrate Diet appear to be similar for inducing clinical remission.

Based on this review, partial enteral nutrition could be an option for patients with Crohn’s disease, Dr. Lewis said.

“Partial enteral nutrition is much easier than exclusive enteral nutrition for patients,” he said. “However, there remains uncertainty as to whether the solid food component of a partial enteral nutrition approach impacts outcomes.”

As more dietary studies become available, the certainty of evidence could improve and lead to better recommendations for patients, Dr. Limketkai and colleagues wrote. They are conducting several studies focused on the concept of precision nutrition.

“While certain diets may be helpful and effective for IBD, different diets work differently in different people. This concept is no different than the fact that different IBD medications work differently in different individuals,” Dr. Limketkai said. “However, given the current state of evidence for dietary interventions in IBD, we still have a long path of research ahead of us.”

The study received no funding. The study authors reported no conflicts of interest. Dr. Lewis reported no relevant disclosures.

A version of this article first appeared on Medscape.com.

Hospitals with more diverse and uninsured patients more likely to provide delayed fracture care

Patients who seek fracture care at a facility that treats a higher proportion of patients from racial or ethnic minorities or a higher number of uninsured patients are more likely to face a longer-than-recommended delay in treatment, according to new data.

Regardless of individual patient-level characteristics such as race, ethnicity, or insurance status, these patients were more likely to miss the recommended 24-hour benchmark for surgery.

“Institutions that treat a less diverse patient population appeared to be more resilient to the mix of insurance status in their patient population and were more likely to meet time-to-surgery benchmarks, regardless of patient insurance status or population-based insurance mix,” write study author Ida Leah Gitajn, MD, an orthopedic trauma surgeon at Dartmouth-Hitchcock Medical Center, Lebanon, N.H., and colleagues.

“While it is unsurprising that increased delays were associated with underfunded institutions, the association between institutional-level racial disparity and surgical delays implies structural health systems bias,” the authors wrote.

The study was published online  in JAMA Network Open.
 

Site performance varied

Racial inequalities in health care utilization and outcomes have been documented in many medical specialties, including orthopedic trauma, the study authors write. However, previous studies evaluating racial disparities in fracture care have been limited to patient-level associations rather than hospital-level factors.

The investigators conducted a secondary analysis of prospectively collected multicenter data for 2,565 patients with hip and femur fractures enrolled in two randomized trials at 23 sites in the United States and Canada. The researchers assessed whether disparities in meeting 24-hour time-to-surgery benchmarks exist at the patient level or at the institutional level, evaluating the association of race, ethnicity, and insurance status.

The cohort study used data from the Program of Randomized Trials to Evaluate Preoperative Antiseptic Skin Solutions in Orthopaedic Trauma (PREP-IT), which enrolled patients from 2018 to 2021 and followed them for 1 year. All patients with hip and femur fractures enrolled in the PREP-IT program were included in the analysis, which was conducted from April to September 2022.

The cohort included 2,565 patients with an average age of about 65 years. About 82% of patients were White, 13.4% were Black, 3.2% were Asian, and 1.1% were classified as another race or ethnicity. Among the study population, 32.5% of participants were employed, and 92.2% had health insurance. Nearly 40% had a femur fracture with an average injury severity score of 10.4.

Overall, 596 patients (23.2%) didn’t meet the 24-hour time-to-operating-room benchmark. Patients who didn’t meet the 24-hour surgical window were more likely to be older, to be women, and to have a femur fracture. They were less likely to be employed.

The 23 sites had variability in meeting the 24-hour benchmark, race and ethnicity distribution, and population-based health insurance. Institutions met benchmarks at frequencies ranging from 45.2% (for 196 of 433 procedures) to 97.4% (37 of 38 procedures). Minority race and ethnicity distribution ranged from 0% (in 99 procedures) to 58.2% (in 53 of 91 procedures). The proportion of uninsured patients ranged from 0% (in 64 procedures) to 34.2% (in 13 of 38 procedures).

At the patient level, there was no association between missing the 24-hour benchmark and race or ethnicity, and there was no independent association between hospital population racial composition and surgical delay. In an analysis that controlled for patient-level characteristics, there was no association between missing the 24-hour benchmark and patient-level insurance status.

There was, however, an independent association between surgical delay and the interaction of hospital population insurance coverage with hospital population racial composition, suggesting a moderating effect (P = .03), the study authors write.

At low rates of uninsured patients, the probability of missing the 24-hour benchmark was 12.5%-14.6% as racial composition varied from 0% to 50% minority patients. In contrast, at higher rates of uninsured patients, the risk of missing the 24-hour window was higher among more diverse populations. For instance, at 30% uninsured, the risk of missing the benchmark was 0.5% when the racial composition was low and 17.6% at 50% minority patients.
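
To illustrate the idea of an interaction (moderating) term like the one described above, here is a minimal, hypothetical sketch of a logistic regression in which hospital-level uninsured share and minority share jointly predict a missed 24-hour benchmark. The variable names and simulated data are assumptions for illustration only; this is not the study’s actual model or dataset.

```python
# Hypothetical sketch: a logistic regression with an interaction term,
# illustrating how a "moderating effect" between hospital-level insurance mix
# and racial composition could be modeled. Data are simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
pct_uninsured = rng.uniform(0.0, 0.35, n)   # hospital share of uninsured patients
pct_minority = rng.uniform(0.0, 0.60, n)    # hospital share of minority patients

# Simulate a moderating effect: delays rise mainly when both shares are high.
logit_p = -2.0 + 1.0 * pct_uninsured + 0.5 * pct_minority + 8.0 * pct_uninsured * pct_minority
p = 1.0 / (1.0 + np.exp(-logit_p))
missed_benchmark = rng.binomial(1, p)

df = pd.DataFrame({
    "missed_benchmark": missed_benchmark,
    "pct_uninsured": pct_uninsured,
    "pct_minority": pct_minority,
})

# In the formula, '*' expands to both main effects plus their interaction
# (pct_uninsured + pct_minority + pct_uninsured:pct_minority).
model = smf.logit("missed_benchmark ~ pct_uninsured * pct_minority", data=df).fit()
print(model.summary())
```

In a model of this general form, a positive and statistically significant interaction coefficient would correspond to the pattern reported here: the association between racial composition and surgical delay grows as the uninsured share rises.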

Additional studies are needed to understand the findings and how health system programs or structures play a role, the authors write. For instance, well-funded health systems that care for a higher proportion of insured patients likely have quality improvement programs and other support structures, such as operating room access, that ensure appropriate time-to-surgery benchmarks for time-sensitive fractures, they say.

Addressing inequalities

Troy Amen, MD, MBA, an orthopedic surgery resident at the Hospital for Special Surgery, New York, said, “Despite these disparities being reported and well documented in recent years, unfortunately, not enough has been done to address them or understand their fundamental root causes.”

Dr. Amen, who wasn’t involved with this study, has researched racial and ethnic disparities in hip fracture surgery care across the United States. He and his colleagues found disparities in delayed time-to-surgery, particularly for Black patients.

“We live in a country and society where we want and strive for equality of care for patients regardless of race, ethnicity, gender, sexual orientation, or background,” he said. “We have a moral imperative to address these disparities as health care providers, not only among ourselves, but also in conjunction with lawmakers, hospital administrators, and health policy specialists.”

Uma Srikumaran, MD, an associate professor of orthopedic surgery at Johns Hopkins University, Baltimore, wasn’t involved with this study but has researched racial disparities in the timing of radiographic assessment and surgical treatment of hip fractures.

“Though we understand that racial disparities are pervasive in health care, we have a great deal left to understand about the extent of those disparities and all the various factors that contribute to them,” Dr. Srikumaran told this news organization.

Dr. Srikumaran and colleagues have found that Black patients had longer wait times for evaluation and surgery than White patients.

“We all want to get to the solutions, but those can be difficult to execute without an intricate understanding of the problem,” he said. “We should encourage this type of research all throughout health care in general but also very locally, as solutions are not likely to be one-size-fits-all.”

Dr. Srikumaran pointed to the need to measure the problem in specific pathologies, populations, geographies, hospital types, and other factors.

“Studying the trends of this issue will help us determine whether our national or local initiatives are making a difference and which interventions are most effective for a particular hospital, geographic location, or particular pathology,” he said. “Accordingly, if a particular hospital or health system isn’t looking at differences in the delivery of care by race, they are missing an opportunity to ensure equity and raise overall quality.”

The study was supported by funding from the Patient Centered Outcomes Research Institute. Dr. Gitajn reported receiving personal fees for consulting and teaching work from Stryker outside the submitted work. Dr. Amen and Dr. Srikumaran reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Patients who seek fracture care at a facility that treats a higher proportion of patients from racial or ethnic minorities or a higher number of uninsured patients are more likely to face a longer-than-recommended delay in treatment, according to new data.

Regardless of individual patient-level characteristics such as race, ethnicity, or insurance status, these patients were more likely to miss the recommended 24-hour benchmark for surgery.

“Institutions that treat a less diverse patient population appeared to be more resilient to the mix of insurance status in their patient population and were more likely to meet time-to-surgery benchmarks, regardless of patient insurance status or population-based insurance mix,” write study author Ida Leah Gitajn, MD, an orthopedic trauma surgeon at Dartmouth-Hitchcock Medical Center, Lebanon, N.H., and colleagues.

“While it is unsurprising that increased delays were associated with underfunded institutions, the association between institutional-level racial disparity and surgical delays implies structural health systems bias,” the authors wrote.

The study was published online  in JAMA Network Open.
 

Site performance varied

Racial inequalities in health care utilization and outcomes have been documented in many medical specialties, including orthopedic trauma, the study authors write. However, previous studies evaluating racial disparities in fracture care have been limited to patient-level associations rather than hospital-level factors.

The investigators conducted a secondary analysis of prospectively collected multicenter data for 2,565 patients with hip and femur fractures enrolled in two randomized trials at 23 sites in the United States and Canada. The researchers assessed whether disparities in meeting 24-hour time-to-surgery benchmarks exist at the patient level or at the institutional level, evaluating the association of race, ethnicity, and insurance status.

The cohort study used data from the Program of Randomized Trials to Evaluate Preoperative Antiseptic Skin Solutions in Orthopaedic Trauma (PREP-IT), which enrolled patients from 2018-2021 and followed them for 1 year. All patients with hip and femur fractures enrolled in the PREP-IT program were included in the analysis, which was conducted from April to September of this year.

The cohort included 2,565 patients with an average age of about 65 years. About 82% of patients were White, 13.4% were Black, 3.2% were Asian, and 1.1% were classified as another race or ethnicity. Among the study population, 32.5% of participants were employed, and 92.2% had health insurance. Nearly 40% had a femur fracture with an average injury severity score of 10.4.

Overall, 596 patients (23.2%) didn’t meet the 24-hour time-to-operating-room benchmark. Patients who didn’t meet the 24-hour surgical window were more likely to be older, women, and have a femur fracture. They were less likely to be employed.

The 23 sites had variability in meeting the 24-hour benchmark, race and ethnicity distribution, and population-based health insurance. Institutions met benchmarks at frequencies ranging from 45.2% (for 196 of 433 procedures) to 97.4% (37 of 38 procedures). Minority race and ethnicity distribution ranged from 0% (in 99 procedures) to 58.2% (in 53 of 91 procedures). The proportion of uninsured patients ranged from 0% (in 64 procedures) to 34.2% (in 13 of 38 procedures).

At the patient level, there was no association between missing the 24-hour benchmark and race or ethnicity, and there was no independent association between hospital population racial composition and surgical delay. In an analysis that controlled for patient-level characteristics, there was no association between missing the 24-hour benchmark and patient-level insurance status.

There was an independent association, however, between the hospital population insurance coverage and hospital population racial composition as an interaction term, suggesting a moderating effect (P = .03), the study authors write.

At low rates of uninsured patients, the probability of missing the 24-hour benchmark was 12.5%-14.6% when racial composition varied from 0%-50% minority patients. In contrast, at higher rates of uninsured patients, the risk of missing the 24-hour window was higher among more diverse populations. For instance, at 30% uninsured, the risk of missing the benchmark was 0.5% when the racial composition was low and 17.6% at 50% minority patients.

Additional studies are needed to understand the findings and how health system programs or structures play a role, the authors write. For instance, well-funded health systems that care for a higher proportion of insured patients likely have quality improvement programs and other support structures, such as operating room access, that ensure appropriate time-to-surgery benchmarks for time-sensitive fractures, they say.

Addressing inequalities

Troy Amen, MD, MBA, an orthopedic surgery resident at the Hospital for Special Surgery, New York, said, “Despite these disparities being reported and well documented in recent years, unfortunately, not enough has been done to address them or understand their fundamental root causes.”

Dr. Amen, who wasn’t involved with this study, has researched racial and ethnic disparities in hip fracture surgery care across the United States. He and his colleagues found disparities in time to surgery, with delays disproportionately affecting Black patients.

“We live in a country and society where we want and strive for equality of care for patients regardless of race, ethnicity, gender, sexual orientation, or background,” he said. “We have a moral imperative to address these disparities as health care providers, not only among ourselves, but also in conjunction with lawmakers, hospital administrators, and health policy specialists.”

Uma Srikumaran, MD, an associate professor of orthopedic surgery at Johns Hopkins University, Baltimore, wasn’t involved with this study but has researched racial disparities in the timing of radiographic assessment and surgical treatment of hip fractures.

“Though we understand that racial disparities are pervasive in health care, we have a great deal left to understand about the extent of those disparities and all the various factors that contribute to them,” Dr. Srikumaran told this news organization.

Dr. Srikumaran and colleagues have found that Black patients had longer wait times for evaluation and surgery than White patients.

“We all want to get to the solutions, but those can be difficult to execute without an intricate understanding of the problem,” he said. “We should encourage this type of research all throughout health care in general but also very locally, as solutions are not likely to be one-size-fits-all.”

Dr. Srikumaran pointed to the need to measure the problem in specific pathologies, populations, geographies, hospital types, and other factors.

“Studying the trends of this issue will help us determine whether our national or local initiatives are making a difference and which interventions are most effective for a particular hospital, geographic location, or particular pathology,” he said. “Accordingly, if a particular hospital or health system isn’t looking at differences in the delivery of care by race, they are missing an opportunity to ensure equity and raise overall quality.”

The study was supported by funding from the Patient Centered Outcomes Research Institute. Dr. Gitajn reported receiving personal fees for consulting and teaching work from Stryker outside the submitted work. Dr. Amen and Dr. Srikumaran reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.



AI versus other interventions for colonoscopy: How do they compare?

Article Type
Changed
Wed, 12/21/2022 - 10:07

Colonoscopies with artificial intelligence demonstrate significantly better adenoma detection rates (ADRs) than most other endoscopic interventions, according to a new report.

AI-based tools appear to outperform other methods intended to increase ADRs, including distal attachment devices, dye-based/virtual chromoendoscopy, water-based techniques, and balloon-assisted devices, researchers found in a systematic review and meta-analysis.

“ADR is a very important quality metric. The higher the ADR, the less likely the chance of interval cancer,” first author Muhammad Aziz, MD, co-chief gastroenterology fellow at the University of Toledo (Ohio), told this news organization. Interval cancer refers to colorectal cancer that is diagnosed within 5 years of a patient’s undergoing a negative colonoscopy.

“Numerous interventions have been attempted and researched to see the impact on ADR,” he said. “The new kid on the block – AI-assisted colonoscopy – is a game-changer. I knew that AI was impactful in improving ADR, but I didn’t know it would be the best.”

The study was published online in the Journal of Clinical Gastroenterology.
 

Analyzing detection rates

Current guidelines set an ADR benchmark of 25% overall, with 30% for men and 20% for women undergoing screening colonoscopy. Every 1% increase in ADR corresponds to a 3% reduction in colorectal cancer risk, Dr. Aziz and his co-authors write.
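Read linearly (an assumption made only for illustration; the underlying studies model the relationship more formally), that figure implies the following back-of-the-envelope calculation:

```python
# Rough linear reading of the quoted relationship: each 1-point absolute increase
# in ADR corresponds to roughly a 3% relative reduction in colorectal cancer risk.
# This is an illustrative simplification, not the source studies' actual model.
def approx_relative_crc_risk_reduction(adr_increase_points, per_point=0.03):
    return adr_increase_points * per_point

# Raising an endoscopist's ADR from the 25% benchmark to 30%:
print(approx_relative_crc_risk_reduction(30 - 25))  # 0.15, i.e., roughly a 15% relative reduction
```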

Several methods can improve ADR over standard colonoscopy. Computer-aided detection and AI methods, which have emerged in recent years, alert the endoscopist to potential lesions in real time with visual signals.

No direct comparative studies had been conducted, so to make an indirect comparison, Dr. Aziz and colleagues undertook a systematic review and network meta-analysis of 94 randomized controlled trials that included 61,172 patients and 20 different study interventions.

The research team assessed the impact of AI in comparison with other endoscopic methods, using relative risk for proportional outcomes and mean difference for continuous outcomes. About 63% of the colonoscopies were for screening and surveillance, and 37% were diagnostic. The effectiveness was ranked by P-score (the probability of being the best treatment).
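For readers less familiar with these metrics, the snippet below shows how a single pairwise relative risk for ADR is derived from trial arm counts. The counts are hypothetical, chosen only to echo the scale of the reported estimates; the study's rankings come from a full network meta-analysis and its P-scores, not from any one comparison like this.

```python
# Relative risk (RR) of detecting at least one adenoma, arm A vs. arm B.
# Counts are hypothetical and for illustration only.
def relative_risk(events_a, total_a, events_b, total_b):
    return (events_a / total_a) / (events_b / total_b)

# e.g., an AI arm with ADR 45% (180 of 400) vs. an HD colonoscopy arm with ADR 32% (128 of 400)
print(round(relative_risk(180, 400, 128, 400), 2))  # 1.41
```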

Overall, AI had the highest P-score (0.96), signifying that it ranked as the best of all interventions for improving ADR, the study authors write. A sensitivity analysis using a fixed-effects model did not significantly alter the effect measure.

The network meta-analysis showed significantly higher ADR for AI, compared with autofluorescence imaging (relative risk, 1.33), dye-based chromoendoscopy (RR, 1.22), Endocap (RR, 1.32), Endocuff (RR, 1.19), Endocuff Vision (RR, 1.26), EndoRings (RR, 1.30), flexible spectral imaging color enhancement (RR, 1.26), full-spectrum endoscopy (RR, 1.40), high-definition (HD) colonoscopy (RR, 1.41), linked color imaging (RR, 1.21), narrow-band imaging (RR, 1.33), water exchange (RR, 1.22), and water immersion (RR, 1.47).

Among 34 studies of colonoscopies for screening or surveillance only, the ADR was significantly improved for linked color imaging (RR, 1.18), I-Scan with contrast and surface enhancement (RR, 1.25), Endocuff (RR, 1.20), Endocuff Vision (RR, 1.13), and water exchange (RR, 1.24), compared with HD colonoscopy. Only one AI study was included in this analysis, because the others had significantly more patients who underwent colonoscopy for diagnostic indications. In this case, AI did not improve ADR, compared with HD colonoscopy (RR, 1.44).

In addition, a significantly improved polyp detection rate (PDR) was noted for AI, compared with autofluorescence imaging (RR, 1.28), Endocap (RR, 1.18), Endocuff Vision (RR, 1.21), EndoRings (RR, 1.30), flexible spectral imaging color enhancement (RR, 1.21), full-spectrum endoscopy (RR, 1.39), HD colonoscopy (RR, 1.34), linked color imaging (RR, 1.19), and narrow-band imaging (RR, 1.21). Again, AI had the highest P-score (0.93).

Among 17 studies of colonoscopy for screening and surveillance, only one AI study was included for PDR. A significantly higher PDR was noted for AI, compared with HD colonoscopy (RR, 1.33). None of the other interventions improved PDR over HD colonoscopy.

No AI advantage for serrated polyps

Twenty-three studies evaluated detection of serrated polyps, including three AI studies. AI did not improve the serrated polyp detection rate (SPDR), compared with other interventions. Several comparisons, however, did show improved SPDR: G-EYE versus full-spectrum endoscopy (RR, 3.93); linked color imaging versus full-spectrum endoscopy (RR, 1.88) and versus HD colonoscopy (RR, 1.71); and Endocuff Vision versus HD colonoscopy (RR, 1.36). G-EYE had the highest P-score (0.93).

AI significantly improved adenomas per colonoscopy, compared with full-spectrum endoscopy (mean difference, 0.38), HD colonoscopy (MD, 0.18), and narrow-band imaging (MD, 0.13), the authors note. However, the number of adenomas detected per colonoscopy was significantly lower for AI, compared with Endocap (MD, -0.13). Endocap had the highest P-score (0.92).

“The strengths of this study include the wide range of endoscopic add-ons included, the number of trials included, and the granularity of some of the reporting data,” Jeremy Glissen Brown, MD, a gastroenterologist and an assistant professor of medicine at Duke University, told this news organization.

Dr. Glissen Brown, who wasn’t involved with this study, researches AI tools for polyp detection. He and colleagues have found that AI decreases adenoma miss rates and increases the number of first-pass adenomas detected per colonoscopy.

“The limitations include significant heterogeneity among many of the comparisons, as well as a high risk of bias, as it is technically difficult to achieve blinding of provider participants in the device-based RCTs [randomized controlled trials] that this analysis was based on,” he said.
 

Additional considerations

Dr. Aziz and colleagues note the need for additional studies of AI-based detection, particularly for screening and surveillance. For widespread adoption into clinical practice, new systems must have higher specificity, sensitivity, accuracy, and efficiency, they write.

“AI technology needs further optimization, as there is still the aspect of having a lot of false positives – lesions detected but not necessarily adenomas that can turn into cancer,” Dr. Aziz said. “This decreases the efficiency of the colonoscopy and increases the anesthesia and sedation time. In addition, different AI systems have different diagnostic yield, as it all depends on the images that were fed to the system or algorithm.”

Dr. Glissen Brown also pointed to the low number of AI-based studies involving serrated polyp lesion detection. Future research could investigate whether computer-aided detection systems (CADe) decrease miss rates and increase detection rates for sessile serrated lesions, he said.

For practical clinical purposes, Dr. Glissen Brown highlighted the potential complementary nature of the various colonoscopy tools. When used together, for instance, AI and Endocuff may increase ADRs even further and decrease the number of missed polyps through different mechanisms, he said.

“It is also important in device research to interrogate the cost versus benefit of any intervention or combination of interventions,” he said. “I think with CADe this is still something that we are figuring out. We will need to find novel ways of making these technologies affordable, especially as the debate of which clinically meaningful outcomes we examine when it comes to AI continues to evolve.”

No funding source for the study was reported. Two authors have received grant support from or have consulted for several pharmaceutical and medical device companies. Dr. Glissen Brown has disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Virtual yoga program appears to improve IBS symptoms, fatigue, stress

Article Type
Changed
Tue, 12/06/2022 - 11:20

An online yoga program appears to be effective, feasible, and safe for patients with irritable bowel syndrome (IBS), according to a new report.

Participants reported a decrease in IBS-related symptoms and improvements in quality of life, fatigue, and perceived stress.

“IBS affects upwards of 15%-20% of the North American population, and despite our advances in the area, we have very limited options to offer our patients,” Maitreyi Raman, MD, an associate professor of medicine at the University of Calgary (Alta.), said in an interview.

“Often, we are focused on treating symptoms but not addressing the underlying cause,” said Dr. Raman, who is director of Alberta’s Collaboration of Excellence for Nutrition in Digestive Diseases. “With advances around the gut microbiome and the evolving science on the brain-gut axis, mind-body interventions could offer a therapeutic option that patients can use to improve the overall course of their disease.”

The study was published online in the American Journal of Gastroenterology.
 

Online yoga program vs. IBS advice only

IBS often involves alterations of the gut-brain axis and can be affected by psychological or physiological stress, the study authors write. Previous studies have found that in-person yoga programs can manage IBS symptoms and improve physiological, psychological, and emotional health.

During the COVID-19 pandemic, yoga programs had to switch to a virtual format – a delivery method that could remain relevant due to limited health care resources. However, the efficacy, feasibility, and safety of virtual yoga for people with IBS were unknown.

Dr. Raman and colleagues conducted a randomized, two-group, controlled clinical trial at the University of Calgary (Alta.) between March 2021 and December 2022. The 79 participants weren’t blinded to the trial arms – an online yoga program or an advice-only control group.

The eligible participants had a diagnosis of IBS, scored at least 75 of 500 points on the IBS Symptom Severity Scale (IBS-SSS), indicating at least mild disease, and were on stable doses of medications for IBS. They were instructed to continue their current therapies during the study and not to start new medications or make major changes to their diet or physical activity.

The yoga program was based on Upa Yoga, a subtype of Hatha Yoga developed by the Isha Foundation of Inner Sciences. The program was delivered by a certified yoga facilitator from the Isha Foundation and included directional movements, neck rotations, breathing practices, breath watching, and mantra meditation with aum/om chanting.

The online classes of three to seven participants were delivered in 60-minute sessions for 8 weeks. The participants were also asked to practice at home daily with the support of yoga videos.

The advice-only control group received a 10-minute video with general education on IBS, the mind-gut connection in IBS, and the role of mind-body therapies in managing IBS. These participants also received a list of IBS-related resources from the Canadian Digestive Health Foundation, a link to an IBS patient support group, and information about physical activity guidelines from the World Health Organization.

The research team looked for a primary endpoint of at least a 50-point reduction on the IBS-SSS, which is considered clinically meaningful.

They also measured secondary outcomes, such as quality of life, anxiety, depression, perceived stress, COVID-19–related stress, fatigue, somatic symptoms, self-compassion, and intention to practice yoga.

Among the 79 participants, 38 were randomized to the yoga program and 41 were randomized to the advice-only control group. The average age was 45 years. Most (92%) were women, and 81% were White. The average IBS duration since diagnosis was 11.5 years.

The overall average IBS-SSS score was moderate at baseline (245.3) and dropped to 207.9 at week 8. The score decreased from 255.2 to 200.5 in the yoga group and from 236.1 to 213.5 in the control group. The 32-point difference between the groups wasn’t statistically significant, though symptom improvement in the yoga group began after 4 weeks.

In the yoga group, 14 participants (37%) met the target decrease of 50 points or more, compared with eight participants (20%) in the control group. These 22 “responders” reported improvements in IBS symptoms, quality of life, perceived stress, and COVID-19–related stress.
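The responder analysis rests on the 50-point IBS-SSS threshold described above. The short sketch below applies that definition to hypothetical per-participant scores and then recovers the rates and mean changes implied by the published figures; only the group-level numbers come from the article.

```python
# Responder definition used in the trial: a drop of at least 50 points on the
# IBS-SSS between baseline and week 8. Individual scores below are invented;
# only the group-level figures come from the article.
def is_responder(baseline, week8, threshold=50):
    return (baseline - week8) >= threshold

hypothetical_yoga_scores = [(255, 200), (240, 210), (300, 120)]
print([is_responder(b, w) for b, w in hypothetical_yoga_scores])  # [True, False, True]

# Responder rates implied by the published counts (14 of 38 yoga, 8 of 41 control)
print(round(14 / 38, 2), round(8 / 41, 2))  # 0.37 0.2

# Mean IBS-SSS change reported for each group
print(round(255.2 - 200.5, 1), round(236.1 - 213.5, 1))  # 54.7 22.6 (~32-point between-group difference)
```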

Specifically, among the 14 responders in the yoga group, there were significant improvements in IBS symptoms, quality of life, fatigue, somatic symptoms, self-compassion, and COVID-19–related stress. In the control group, there were significant improvements in IBS symptoms and COVID-19–related stress.

Using an intent-to-treat analysis, the research team found that the yoga group had improved quality of life, fatigue, and perceived stress. In the control group, improvements were seen only in COVID-19–related stress.

No significant improvements were found in anxiety or depression between the groups, although the changes in depression scores were in favor of the yoga group. The intention to practice yoga dropped in both groups during the study period, but it wasn’t associated with the actual yoga practice minutes or change in IBS-SSS scores.

“We saw a surprising improvement in quality of life,” Dr. Raman said. “Although we talk about quality of life as an important endpoint, it can be hard to show in studies, so that was a nice finding to demonstrate in this study.”

The yoga intervention was feasible, with 79% adherence, a 20% attrition rate, and high program satisfaction, the researchers write. Safety was demonstrated by the absence of any adverse events.

Future program considerations

Dr. Raman and colleagues are interested in understanding the mechanisms that underlie the efficacy of mind-body interventions. They also plan to test the virtual yoga program in a mobile app, called LyfeMD, which is intended to support patients with digestive diseases through evidence-based dietary programs and mind-body interventions, such as guided meditation, breathing exercises, and cognitive behavioral therapy.

“We know that patients are looking for all possible resources,” Dr. Raman said. “Our next goal is to better understand how an app-based intervention can be effective, even without a live instructor.”

Future studies should also consider clinicians’ perspectives, she noted. In previous studies, Dr. Raman and colleagues have found that physicians are open to recommending yoga as a therapeutic option for patients, but some are unsure how to prescribe a recommended dose, frequency, or type of yoga.

“When treating patients with IBS, it is important to think broadly and creatively about all our treatment options,” said Elyse Thakur, PhD, a clinical health psychologist at Atrium Health Gastroenterology and Hepatology, Charlotte, N.C.

Dr. Thakur, who wasn’t involved with this study, specializes in gastrointestinal health psychology. She and colleagues use numerous complementary and alternative medicine options with patients.

“We have to remember that people may respond differently to available treatment options,” she said. “It is imperative to understand the evidence so we can have productive conversations with our patients about the pros and cons and the potential benefits and limitations.”

The study did not receive a specific grant from a funding agency. The authors and Dr. Thakur declared no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

An online yoga program appears to be effective, feasible, and safe for patients with irritable bowel syndrome (IBS), according to a new report.

Participants reported a decrease in IBS-related symptoms and improvements in quality of life, fatigue, and perceived stress.

“IBS affects upwards of 15%-20% of the North American population, and despite our advances in the area, we have very limited options to offer our patients,” Maitreyi Raman, MD, an associate professor of medicine at the University of Calgary (Alta.), said in an interview.

“Often, we are focused on treating symptoms but not addressing the underlying cause,” said Dr. Raman, who is director of Alberta’s Collaboration of Excellence for Nutrition in Digestive Diseases. “With advances around the gut microbiome and the evolving science on the brain-gut axis, mind-body interventions could offer a therapeutic option that patients can use to improve the overall course of their disease.”

The study was published online in the American Journal of Gastroenterology.
 

Online yoga program vs. IBS advice only

IBS often involves alterations of the gut-brain axis and can be affected by psychological or physiological stress, the study authors write. Previous studies have found that in-person yoga programs can manage IBS symptoms and improve physiological, psychological, and emotional health.

During the COVID-19 pandemic, yoga programs had to switch to a virtual format – a delivery method that could remain relevant due to limited health care resources. However, the efficacy, feasibility, and safety of virtual yoga for people with IBS were unknown.

Dr. Raman and colleagues conducted a randomized, two-group, controlled clinical trial at the University of Calgary (Alta.) between March 2021 and December 2022. The 79 participants weren’t blinded to the trial arms – an online yoga program or an advice-only control group.

The eligible participants had a diagnosis of IBS, scored at least 75 out of 500 points on the IBS Symptoms Severity Scale (IBS-SSS) for mild IBS, and were on stable doses of medications for IBS. They were instructed to continue with their current therapies during the study but didn’t start new medications or make major changes to their diet or physical patterns.

The yoga program was based on Upa Yoga, a subtype of Hatha Yoga developed by the Isha Foundation of Inner Sciences. The program was delivered by a certified yoga facilitator from the Isha Foundation and included directional movements, neck rotations, breathing practices, breath watching, and mantra meditation with aum/om chanting.

The online classes of three to seven participants were delivered in 60-minute sessions for 8 weeks. The participants were also asked to practice at home daily with the support of yoga videos.

The advice-only control group included a 10-minute video with general education on IBS, the mind-gut connection in IBS, and the role of mind-body therapies in managing IBS. The participants received a list of IBS-related resources from the Canadian Digestive Health Foundation, a link to an IBS patient support group, and information about physical activity guidelines from the World Health Organization.

The research team looked for a primary endpoint of at least a 50-point reduction on the IBS-SSS, which is considered clinically meaningful.

They also measured for secondary outcomes, such as quality of life, anxiety, depression, perceived stress, COVID-19–related stress, fatigue, somatic symptoms, self-compassion, and intention to practice yoga.

Among the 79 participants, 38 were randomized to the yoga program and 41 were randomized to the advice-only control group. The average age was 45 years. Most (92%) were women, and 81% were White. The average IBS duration since diagnosis was 11.5 years.

The overall average IBS-SSS was moderate, at 245.3, at the beginning of the program, and dropped to 207.9 at week 8. The score decreased from 255.2 to 200.5 in the yoga group and from 236.1 to 213.5 in the control group. The difference between the groups was 32 points, which wasn’t statistically significant, though symptom improvement began after 4 weeks in the yoga group.

In the yoga group, 14 participants (37%) met the target decrease of 50 points or more, compared with eight participants (20%) in the control group. These 22 “responders” reported improvements in IBS symptoms, quality of life, perceived stress, and COVID-19–related stress.

Specifically, among the 14 responders in the yoga group, there were significant improvements in IBS symptoms, quality of life, fatigue, somatic symptoms, self-compassion, and COVID-19–related stress. In the control group, there were significant improvements in IBS symptoms and COVID-19–related stress.

Using an intent-to-treat analysis, the research team found that the yoga group had improved quality of life, fatigue, and perceived stress. In the control group, improvements were seen only in COVID-19–related stress.

No significant improvements were found in anxiety or depression between the groups, although the changes in depression scores were in favor of the yoga group. The intention to practice yoga dropped in both groups during the study period, but it wasn’t associated with the actual yoga practice minutes or change in IBS-SSS scores.

“We saw a surprising improvement in quality of life,” Dr. Raman said. “Although we talk about quality of life as an important endpoint, it can be hard to show in studies, so that was a nice finding to demonstrate in this study.”

The yoga intervention was feasible in terms of adherence (79%), attrition rate (20%), and high program satisfaction, the researchers write. Safety was demonstrated by the absence of any adverse events.
 

 

 

Future program considerations

Dr. Raman and colleagues are interested in understanding the mechanisms that underlie the efficacy of mind-body interventions. They also plan to test the virtual yoga program in a mobile app, called LyfeMD, which is intended to support patients with digestive diseases through evidence-based dietary programs and mind-body interventions, such as guided meditation, breathing exercises, and cognitive behavioral therapy.

“We know that patients are looking for all possible resources,” Dr. Raman said. “Our next goal is to better understand how an app-based intervention can be effective, even without a live instructor.”

Future studies should also consider clinicians’ perspectives, she noted. In previous studies, Dr. Raman and colleagues have found that physicians are open to recommending yoga as a therapeutic option for patients, but some are unsure how to prescribe a recommended dose, frequency, or type of yoga.

“When treating patients with IBS, it is important to think broadly and creatively about all our treatment options,” said Elyse Thakur, PhD, a clinical health psychologist at Atrium Health Gastroenterology and Hepatology, Charlotte, N.C.

Dr. Thakur, who wasn’t involved with this study, specializes in gastrointestinal health psychology. She and colleagues use numerous complementary and alternative medicine options with patients.

“We have to remember that people may respond differently to available treatment options,” she said. “It is imperative to understand the evidence so we can have productive conversations with our patients about the pros and cons and the potential benefits and limitations.”

The study did not receive a specific grant from a funding agency. The authors and Dr. Thakur declared no relevant financial relationships.

A version of this article first appeared on Medscape.com.

An online yoga program appears to be effective, feasible, and safe for patients with irritable bowel syndrome (IBS), according to a new report.

Participants reported a decrease in IBS-related symptoms and improvements in quality of life, fatigue, and perceived stress.

“IBS affects upwards of 15%-20% of the North American population, and despite our advances in the area, we have very limited options to offer our patients,” Maitreyi Raman, MD, an associate professor of medicine at the University of Calgary (Alta.), said in an interview.

“Often, we are focused on treating symptoms but not addressing the underlying cause,” said Dr. Raman, who is director of Alberta’s Collaboration of Excellence for Nutrition in Digestive Diseases. “With advances around the gut microbiome and the evolving science on the brain-gut axis, mind-body interventions could offer a therapeutic option that patients can use to improve the overall course of their disease.”

The study was published online in the American Journal of Gastroenterology.
 

Online yoga program vs. IBS advice only

IBS often involves alterations of the gut-brain axis and can be affected by psychological or physiological stress, the study authors write. Previous studies have found that in-person yoga programs can manage IBS symptoms and improve physiological, psychological, and emotional health.

During the COVID-19 pandemic, yoga programs had to switch to a virtual format – a delivery method that could remain relevant due to limited health care resources. However, the efficacy, feasibility, and safety of virtual yoga for people with IBS were unknown.

Dr. Raman and colleagues conducted a randomized, two-group, controlled clinical trial at the University of Calgary (Alta.) between March 2021 and December 2022. The 79 participants weren’t blinded to the trial arms – an online yoga program or an advice-only control group.

The eligible participants had a diagnosis of IBS, scored at least 75 out of 500 points on the IBS Symptoms Severity Scale (IBS-SSS) for mild IBS, and were on stable doses of medications for IBS. They were instructed to continue with their current therapies during the study but didn’t start new medications or make major changes to their diet or physical patterns.

The yoga program was based on Upa Yoga, a subtype of Hatha Yoga developed by the Isha Foundation of Inner Sciences. The program was delivered by a certified yoga facilitator from the Isha Foundation and included directional movements, neck rotations, breathing practices, breath watching, and mantra meditation with aum/om chanting.

The online classes of three to seven participants were delivered in 60-minute sessions for 8 weeks. The participants were also asked to practice at home daily with the support of yoga videos.

The advice-only control group included a 10-minute video with general education on IBS, the mind-gut connection in IBS, and the role of mind-body therapies in managing IBS. The participants received a list of IBS-related resources from the Canadian Digestive Health Foundation, a link to an IBS patient support group, and information about physical activity guidelines from the World Health Organization.

The research team looked for a primary endpoint of at least a 50-point reduction on the IBS-SSS, which is considered clinically meaningful.

They also measured for secondary outcomes, such as quality of life, anxiety, depression, perceived stress, COVID-19–related stress, fatigue, somatic symptoms, self-compassion, and intention to practice yoga.

Among the 79 participants, 38 were randomized to the yoga program and 41 were randomized to the advice-only control group. The average age was 45 years. Most (92%) were women, and 81% were White. The average IBS duration since diagnosis was 11.5 years.

The overall average IBS-SSS was moderate, at 245.3, at the beginning of the program, and dropped to 207.9 at week 8. The score decreased from 255.2 to 200.5 in the yoga group and from 236.1 to 213.5 in the control group. The difference between the groups was 32 points, which wasn’t statistically significant, though symptom improvement began after 4 weeks in the yoga group.

In the yoga group, 14 participants (37%) met the target decrease of 50 points or more, compared with eight participants (20%) in the control group. These 22 “responders” reported improvements in IBS symptoms, quality of life, perceived stress, and COVID-19–related stress.

Specifically, among the 14 responders in the yoga group, there were significant improvements in IBS symptoms, quality of life, fatigue, somatic symptoms, self-compassion, and COVID-19–related stress. In the control group, there were significant improvements in IBS symptoms and COVID-19–related stress.

Using an intent-to-treat analysis, the research team found that the yoga group had improved quality of life, fatigue, and perceived stress. In the control group, improvements were seen only in COVID-19–related stress.

No significant improvements were found in anxiety or depression between the groups, although the changes in depression scores were in favor of the yoga group. The intention to practice yoga dropped in both groups during the study period, but it wasn’t associated with the actual yoga practice minutes or change in IBS-SSS scores.

“We saw a surprising improvement in quality of life,” Dr. Raman said. “Although we talk about quality of life as an important endpoint, it can be hard to show in studies, so that was a nice finding to demonstrate in this study.”

The yoga intervention was feasible, the researchers wrote, with 79% adherence, a 20% attrition rate, and high program satisfaction. Safety was supported by the absence of any adverse events.
 

 

 

Future program considerations

Dr. Raman and colleagues are interested in understanding the mechanisms that underlie the efficacy of mind-body interventions. They also plan to test the virtual yoga program in a mobile app, called LyfeMD, which is intended to support patients with digestive diseases through evidence-based dietary programs and mind-body interventions, such as guided meditation, breathing exercises, and cognitive behavioral therapy.

“We know that patients are looking for all possible resources,” Dr. Raman said. “Our next goal is to better understand how an app-based intervention can be effective, even without a live instructor.”

Future studies should also consider clinicians’ perspectives, she noted. In previous studies, Dr. Raman and colleagues have found that physicians are open to recommending yoga as a therapeutic option for patients, but some are unsure how to prescribe a recommended dose, frequency, or type of yoga.

“When treating patients with IBS, it is important to think broadly and creatively about all our treatment options,” said Elyse Thakur, PhD, a clinical health psychologist at Atrium Health Gastroenterology and Hepatology, Charlotte, N.C.

Dr. Thakur, who wasn’t involved with this study, specializes in gastrointestinal health psychology. She and colleagues use numerous complementary and alternative medicine options with patients.

“We have to remember that people may respond differently to available treatment options,” she said. “It is imperative to understand the evidence so we can have productive conversations with our patients about the pros and cons and the potential benefits and limitations.”

The study did not receive a specific grant from a funding agency. The authors and Dr. Thakur declared no relevant financial relationships.

A version of this article first appeared on Medscape.com.

FROM THE AMERICAN JOURNAL OF GASTROENTEROLOGY

Celiac disease linked to higher risk for rheumatoid arthritis, juvenile idiopathic arthritis

Article Type
Changed
Fri, 11/18/2022 - 10:09

Celiac disease is linked to juvenile idiopathic arthritis (JIA) in children and rheumatoid arthritis (RA) in adults, according to an analysis of nationwide data in Sweden.

Children with celiac disease are nearly three times as likely to develop JIA relative to the general population. Adults with celiac disease are nearly two times as likely to be diagnosed with RA.

“I hope that our study can ultimately change clinical practice by lowering the threshold to evaluate celiac disease patients for inflammatory joint diseases,” John B. Doyle, MD, a gastroenterology fellow at Columbia University Irving Medical Center in New York, told this news organization.

“Inflammatory joint diseases, such as JIA and RA, are notoriously difficult to diagnose given their variable presentations,” he said. “But if JIA or RA can be identified sooner by physicians, patients will ultimately benefit by starting disease-modifying therapy earlier in their disease course.”

The study was published online in The American Journal of Gastroenterology.
 

Analyzing associations

Celiac disease has been linked to numerous autoimmune diseases, including type 1 diabetes, autoimmune thyroid disease, lupus, and inflammatory bowel disease (IBD), Dr. Doyle noted. However, a definitive epidemiologic association between celiac disease and inflammatory joint diseases such as JIA or RA hasn't been established.

Dr. Doyle and colleagues conducted a nationwide population-based, retrospective matched cohort study using data from the Epidemiology Strengthened by Histopathology Reports in Sweden (ESPRESSO) cohort. They identified 24,014 patients with biopsy-proven celiac disease diagnosed between 2004 and 2017.

With these data, each patient was matched to five reference individuals in the general population by age, sex, calendar year, and geographic region, for a total of 117,397 people without a previous diagnosis of celiac disease. The researchers calculated the incidence and estimated the relative risk for JIA in patients younger than 18 years and RA in patients aged 18 years or older.

For those younger than 18 years, the incidence rate of JIA was 5.9 per 10,000 person-years among the 9,415 patients with celiac disease versus 2.2 per 10,000 person-years in the general population, over a follow-up of 7 years. Those with celiac disease were 2.7 times as likely to develop JIA.

The association between celiac disease and JIA remained similar after adjustment for education, Nordic country of birth, type 1 diabetes, autoimmune thyroid disease, lupus, and IBD. The incidence rate of JIA among patients with celiac disease was higher in both females and males, and across all age groups studied.

When 6,703 children with celiac disease were compared with their 9,089 siblings without celiac disease, the higher risk for JIA in patients with celiac disease fell slightly short of statistical significance.

For those aged 18 years or older, the incidence rate of RA was 8.4 per 10,000 person-years among the 14,599 patients with celiac disease versus 5.1 per 10,000 person-years in the general population, over a follow-up of 8.8 years. Those with celiac disease were 1.7 times as likely to develop RA.
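
As a rough check, the crude rate ratios implied by these incidence figures can be computed directly; a minimal Python sketch follows. The relative risks quoted above come from the study's matched, adjusted models, so the crude ratios are only approximately the same.

```python
# Crude incidence-rate ratios implied by the reported rates
# (cases per 10,000 person-years); the published relative risks
# are matched and adjusted, so they differ slightly.

def rate_ratio(celiac_rate: float, reference_rate: float) -> float:
    return celiac_rate / reference_rate

jia = rate_ratio(5.9, 2.2)  # children with celiac disease vs. general population
ra = rate_ratio(8.4, 5.1)   # adults with celiac disease vs. general population

print(f"JIA crude rate ratio: {jia:.1f}")  # ~2.7, matching the reported relative risk
print(f"RA crude rate ratio:  {ra:.1f}")   # ~1.6, close to the reported 1.7
```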

As with the younger cohort, the association between celiac disease and RA in the adult group remained similar after adjustment for education, Nordic country of birth, type 1 diabetes, autoimmune thyroid disease, lupus, and IBD. Although both men and women with celiac disease had higher rates of RA, the risk was higher among those in whom disease was diagnosed at age 18-59 years compared with those who received a diagnosis at age 60 years or older.

When 9,578 adults with celiac disease were compared with their 17,067 siblings without celiac disease, the risk for RA remained higher in patients with celiac disease.

This suggests “that the association between celiac disease and RA is unlikely to be explained by environmental factors alone,” Dr. Doyle said.
 

 

 

Additional findings

Notably, the primary analysis excluded patients diagnosed with JIA or RA before their celiac disease diagnosis. In additional analyses, however, significant associations emerged.

Among children with celiac disease, 0.5% had a previous diagnosis of JIA, compared with 0.1% of matched comparators. Those with celiac disease were 3.5 times more likely to have a JIA diagnosis.

Among adults with celiac disease, 0.9% had a previous diagnosis of RA, compared with 0.6% of matched comparators. Those with celiac disease were 1.4 times more likely to have an RA diagnosis.

“We found that diagnoses of these types of arthritis were more common before a diagnosis of celiac disease compared to the general population,” Benjamin Lebwohl, MD, director of clinical research at the Celiac Disease Center at Columbia University, New York, told this news organization.

“This suggests that undiagnosed and untreated celiac disease might be contributing to these other autoimmune conditions,” he said.

Dr. Doyle and Dr. Lebwohl emphasized the practical implications for clinicians caring for patients with celiac disease. Among patients with celiac disease and inflammatory joint symptoms, clinicians should have a low threshold to evaluate for JIA or RA, they said.

“Particularly in pediatrics, we are trained to screen patients with JIA for celiac disease, but this study points to the possible bidirectional association and the importance of maintaining a clinical suspicion for JIA and RA among established celiac disease patients,” Marisa Stahl, MD, assistant professor of pediatrics and associate program director of the pediatric gastroenterology, hepatology, and nutrition fellowship training program at the University of Colorado at Denver, Aurora, said in an interview.

Dr. Stahl, who wasn’t involved with this study, conducts research at the Colorado Center for Celiac Disease. She and colleagues are focused on understanding the genetic and environmental factors that lead to the development of celiac disease and other autoimmune diseases.

Given the clear association between celiac disease and other autoimmune diseases, Dr. Stahl agreed that clinicians should have a low threshold for screening, with “additional workup for other autoimmune diseases once an autoimmune diagnosis is established.”

The study was supported by Karolinska Institutet and the Swedish Research Council. Dr. Lebwohl coordinates a study on behalf of the Swedish IBD quality register, which has received funding from Janssen. The other authors declared no conflicts of interest. Dr. Stahl reported no relevant disclosures.

A version of this article first appeared on Medscape.com.


Flu vaccination associated with reduced stroke risk

Article Type
Changed
Fri, 11/18/2022 - 07:51

Influenza vaccination is associated with a reduced risk of stroke among adults, even if they aren’t at high risk for stroke, according to new research.

The risk of stroke was about 23% lower in the 6 months following a flu shot, regardless of the patient’s age, sex, or underlying health conditions.

“There is an established link between upper respiratory infection and both heart attack and stroke. This has been very salient in the past few years throughout the COVID-19 pandemic,” study author Jessalyn Holodinsky, PhD, a stroke epidemiologist and postdoctoral fellow in clinical neurosciences at the University of Calgary (Alta.), told this news organization.

“It is also known that the flu shot can reduce risk of heart attack and hospitalization for those with heart disease,” she said. “Given both of these [observations], we thought it prudent to study whether there is a link between vaccination for influenza and stroke.”

The study was published in the Lancet Public Health.
 

Large effect size

The investigators analyzed administrative data from 2009 through 2018 from the Alberta Health Care Insurance Plan, which covers all residents of Alberta. The province provides free seasonal influenza vaccines to residents under the insurance plan.

The research team looked for stroke events such as acute ischemic stroke, intracerebral hemorrhage, subarachnoid hemorrhage, and transient ischemic attack. They then analyzed the risk of stroke events among those with or without a flu shot in the previous 6 months. They accounted for multiple factors, including age, sex, income, location, and factors related to stroke risk, such as anticoagulant use, atrial fibrillation, chronic obstructive pulmonary disease, diabetes, and hypertension.

Among the 4.1 million adults included in the researchers’ analysis, about 1.8 million (43%) received at least one vaccination during the study period. Nearly 97,000 people received a flu vaccine in each year they were in the study, including 29,288 who received a shot in all 10 flu seasons included in the study.

About 38,000 stroke events were recorded, including about 34,000 (90%) first stroke events. Among the 10% of strokes that were recurrent events, the maximum number of stroke events in one person was nine.

Overall, patients who received at least one influenza vaccine were more likely to be older, be women, and have higher rates of comorbidities. The vaccinated group had a slightly higher proportion of people who lived in urban areas, but the income levels were similar between the vaccinated and unvaccinated groups.

The crude incidence of stroke was higher among people who had ever received an influenza vaccination, at 1.25%, compared with 0.52% among those who hadn’t been vaccinated. However, after adjusting for age, sex, underlying conditions, and socioeconomic status, recent flu vaccination (that is, in the previous 6 months) was associated with a 23% reduced risk of stroke.
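
The combination of a higher crude rate with a lower adjusted risk is the classic signature of confounding: older adults and people with more comorbidities are both more likely to be vaccinated and more likely to have a stroke. The Python sketch below uses purely hypothetical numbers, not the study's data, to show how stratifying by age can reverse a crude comparison.

```python
# Hypothetical illustration of confounding by age: crude stroke risk is
# higher in the vaccinated group, yet within every age stratum the
# vaccinated risk is lower. Numbers are invented for illustration only.

strata = {
    # age group: (vaccinated strokes, vaccinated n, unvaccinated strokes, unvaccinated n)
    "younger adults": (30, 20_000, 160, 80_000),
    "older adults":   (1_440, 80_000, 400, 20_000),
}

vax_strokes = sum(s[0] for s in strata.values())
vax_total = sum(s[1] for s in strata.values())
unvax_strokes = sum(s[2] for s in strata.values())
unvax_total = sum(s[3] for s in strata.values())

print(f"Crude risk, vaccinated:   {vax_strokes / vax_total:.2%}")      # 1.47% (higher)
print(f"Crude risk, unvaccinated: {unvax_strokes / unvax_total:.2%}")  # 0.56% (lower)

for group, (vs, vn, us, un) in strata.items():
    # Within each age group, the vaccinated risk is the lower of the two.
    print(f"{group}: vaccinated {vs / vn:.2%} vs. unvaccinated {us / un:.2%}")
```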

The significant reduction in risk applied to all stroke types, particularly acute ischemic stroke and intracerebral hemorrhage. In addition, influenza vaccination was associated with a reduced risk across all ages and risk profiles, except patients without hypertension.

“What we were most surprised by was the sheer magnitude of the effect and that it existed across different adult age groups, for both sexes, and for those with and without risk factors for stroke,” said Dr. Holodinsky.

Vaccination was associated with a larger reduction in stroke risk in men than in women, perhaps because unvaccinated men had a significantly higher baseline risk for stroke than unvaccinated women, the study authors write.
 

 

 

Promoting cardiovascular health

In addition, vaccination was associated with a greater relative reduction in stroke risk in younger age groups, lower income groups, and those with diabetes, chronic obstructive pulmonary disease, and anticoagulant use.

Among 2.4 million people observed for the entire study period, vaccination protection increased with the number of vaccines received. People who were vaccinated serially each year had a significantly lower risk of stroke than those who received one shot.

Dr. Holodinsky and colleagues are conducting additional research into influenza vaccination, including stroke risk in children. They’re also investigating whether the reduced risk applies to other vaccinations for respiratory illnesses, such as COVID-19 and pneumonia.

“We hope that this added effect of vaccination encourages more adults to receive the flu shot,” she said. “One day, vaccinations might be considered a key pillar of cardiovascular health, along with diet, exercise, control of hypertension and high cholesterol, and smoking cessation.”

Future research should also investigate the reasons why adults – particularly people at high risk with underlying conditions – don’t receive recommended influenza vaccines, the study authors wrote.
 

‘Call to action’

Bahar Behrouzi, an MD-PhD candidate focused on clinical epidemiology at the Institute of Health Policy, Management, and Evaluation, University of Toronto, said: “There are a variety of observational studies around the world that show that flu vaccine uptake is low among the general population and high-risk persons. In studying these questions, our hope is that we can continue to build confidence in viral respiratory vaccines like the influenza vaccine by continuing to generate rigorous evidence with the latest data.”

Ms. Behrouzi, who wasn’t involved with this study, has researched influenza vaccination and cardiovascular risk. She and her colleagues have found that flu vaccines were associated with a 34% lower risk of major adverse cardiovascular events, including a 45% reduced risk among patients with recent acute coronary syndrome.

“The broader public health message is for people to advocate for themselves and get the seasonal flu vaccine, especially if they are part of an at-risk group,” she said. “In our studies, we have positioned this message as a call to action not only for the public, but also for health care professionals – particularly specialists such as cardiologists or neurologists – to encourage or remind them to engage in conversation about the broad benefits of vaccination beyond just preventing or reducing the severity of flu infection.”

The study was conducted without outside funding. Dr. Holodinsky and Ms. Behrouzi have reported no relevant disclosures.

A version of this article first appeared on Medscape.com.


Liver disease-related deaths rise during pandemic

Article Type
Changed
Mon, 11/14/2022 - 12:06

 

U.S. mortality for alcohol-associated liver disease (ALD) and nonalcoholic fatty liver disease (NAFLD) increased at “alarming” rates during the COVID-19 pandemic, according to new findings presented at the annual meeting of the American Association for the Study of Liver Diseases.


Between 2019 and 2021, ALD-related deaths increased by 17.6% and NAFLD-related deaths increased by 14.5%, Yee Hui Yeo, MD, a resident physician and hepatology-focused investigator at Cedars-Sinai Medical Center in Los Angeles, said at a preconference press briefing.

“Even before the pandemic, the mortality rates for these two diseases have been increasing, with NAFLD having an even steeper increasing trend,” he said. “During the pandemic, these two diseases had a significant surge.”
 

Recent U.S. liver disease death rates

Dr. Yeo and colleagues analyzed data from the Centers for Disease Control and Prevention’s National Vital Statistics System to estimate the age-standardized mortality rates (ASMR) of liver disease between 2010 and 2021, including ALD, NAFLD, hepatitis B, and hepatitis C. Using prediction modeling analyses based on trends from 2010 to 2019, they predicted mortality rates for 2020-2021 and compared them with the observed rates to quantify the differences related to the pandemic.
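
The observed-versus-predicted comparison amounts to fitting the prepandemic trend and extrapolating it forward. A minimal Python sketch of that idea follows; the 2010-2019 rates used here are placeholders chosen only to be roughly consistent with the 3.5% annual increase reported for ALD, while the 2020-2021 observed values are the figures reported below.

```python
# Sketch of the observed-vs-predicted approach: fit a log-linear trend to
# prepandemic age-standardized mortality rates (ASMR), extrapolate to
# 2020-2021, and compare with observed rates. The 2010-2019 values are
# placeholders (not the study's data); the observed 2020-2021 ALD ASMRs
# are the figures reported in the abstract.

import numpy as np

years = np.arange(2010, 2020)
asmr = np.array([9.2, 9.5, 9.9, 10.2, 10.6, 11.0, 11.3, 11.7, 12.1, 12.6])  # placeholder

# Log-linear fit: log(rate) = a + b*year, so the annual percentage
# change (APC) is (exp(b) - 1) * 100.
b, a = np.polyfit(years, np.log(asmr), 1)
print(f"APC for 2010-2019: {(np.exp(b) - 1) * 100:.1f}%")

for year, observed in [(2020, 15.7), (2021, 17.4)]:
    predicted = np.exp(a + b * year)
    print(f"{year}: predicted ASMR {predicted:.1f}, observed {observed:.1f} per 100,000")
```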

Between 2010 and 2021, there were about 626,000 chronic liver disease–related deaths, including about 343,000 ALD-related deaths, 204,000 hepatitis C–related deaths, 58,000 NAFLD-related deaths, and 21,000 hepatitis B–related deaths.

For ALD-related deaths, the annual percentage change was 3.5% for 2010-2019 and 17.6% for 2019-2021. The observed ASMR in 2020 was significantly higher than predicted, at 15.7 deaths per 100,000 people versus 13.0 predicted from the 2010-2019 rate. The trend continued in 2021, with 17.4 deaths per 100,000 people versus 13.4 in the previous decade.

The highest numbers of ALD-related deaths during the COVID-19 pandemic occurred in Alaska, Montana, Wyoming, Colorado, New Mexico, and South Dakota.

For NAFLD-related deaths, the annual percentage change was 7.6% for 2010-2014, 11.8% for 2014-2019, and 14.5% for 2019-2021. The observed ASMR was also higher than predicted, at 3.1 deaths per 100,000 people versus 2.6 in 2020, as well as 3.4 versus 2.8 in 2021.

The highest numbers of NAFLD-related deaths during the COVID-19 pandemic occurred in Oklahoma, Indiana, Kentucky, Tennessee, and West Virginia.
 

Hepatitis B and C gains lost in pandemic

In contrast, the annual percentage change was –1.9% for hepatitis B and –2.8% for hepatitis C. After new treatments for hepatitis C emerged in 2013-2014, the annual percentage change in hepatitis C mortality was –7.8% for 2014-2019, Dr. Yeo noted.

“However, during the pandemic, we saw that this decrease has become a nonsignificant change,” he said. “That means our progress of the past 5 or 6 years has already stopped during the pandemic.”

By race and ethnicity, the increase in ALD-related mortality was most pronounced in non-Hispanic White, non-Hispanic Black, and Alaska Native/American Indian populations, Dr. Yeo said. Alaska Natives and American Indians had the highest annual percentage change, at 18%, followed by non-Hispanic Whites at 11.7% and non-Hispanic Blacks at 10.8%. There were no significant differences in race and ethnicity for NAFLD-related deaths, although all groups had major increases in recent years.
 

 

 

Biggest rise in young adults

By age, the increase in ALD-related mortality was particularly severe for ages 25-44, with an annual percentage change of 34.6% in 2019-2021, as compared with 13.7% for ages 45-64 and 12.6% for ages 65 and older.

For NAFLD-related deaths, another major increase was observed among ages 25-44, with an annual percentage change of 28.1% for 2019-2021, as compared with 12% for ages 65 and older and 7.4% for ages 45-64.

By sex, the ASMR increase in NAFLD-related mortality was steady throughout 2010-2021 for both men and women. In contrast, ALD-related mortality increased sharply between 2019 and 2021, with an annual percentage change of 19.1% for women and 16.7% for men.

“The increasing trend in mortality rates for ALD and NAFLD has been quite alarming, with disparities in age, race, and ethnicity,” Dr. Yeo said.

The study received no funding support. Some authors disclosed research funding, advisory board roles, and consulting fees with various pharmaceutical companies.


Living donor liver transplants on rise for most urgent need

Article Type
Changed
Mon, 11/14/2022 - 11:59

Living donor liver transplants (LDLT) for recipients with the most urgent need for a liver transplant in the next 3 months – a Model for End-Stage Liver Disease (MELD) score of 25 or higher – have become more frequent during the past decade, according to new findings presented at the annual meeting of the American Association for the Study of Liver Diseases.

Among LDLT recipients, researchers found comparable patient and graft survival at low and high MELD scores. But among patients with high MELD scores, researchers found lower adjusted graft survival and a higher retransplant rate among those with living donors, compared with recipients of deceased donor liver transplantation (DDLT).

The findings suggest certain advantages of LDLT over DDLT may be lost in the high-MELD setting in terms of graft survival, said Benjamin Rosenthal, MD, an internal medicine resident focused on transplant hepatology at the Hospital of the University of Pennsylvania, Philadelphia.

“Historically, in the United States especially, living donor liver transplantation has been offered to patients with low or moderate MELD,” he said. “The outcomes of LDLT at high MELD are currently unknown.”

Previous data from the Adult-to-Adult Living Donor Liver Transplantation Cohort Study (A2ALL) found that LDLT offered a survival benefit versus remaining on the wait list, independent of MELD score, he said. A recent study also has demonstrated a survival benefit across MELD scores of 11-26, but findings for MELD scores of 25 and higher have been mixed.

Trends and outcomes in LDLT at high MELD scores

Dr. Rosenthal and colleagues conducted a retrospective cohort study of adult LDLT recipients from 2010 to 2021 using data from the Organ Procurement and Transplantation Network (OPTN), the U.S. donation and transplantation system.

In baseline characteristics among LDLT recipients, there weren’t significant differences in age, sex, race, and ethnicity for MELD scores below 25 or at 25 and higher. There also weren’t significant differences in donor age, relationship, use of nondirected grafts, or percentage of right and left lobe donors. However, recipients with high MELD scores had more nonalcoholic steatohepatitis (29.5% versus 24.6%) and alcohol-associated cirrhosis (21.6% versus 14.3%).

The research team evaluated graft survival among LDLT recipients by MELD below 25 and at 25 or higher. They also compared posttransplant patient and graft survival between LDLT and DDLT recipients with a MELD of 25 or higher. They excluded transplant candidates on the wait list for Status 1/1A, redo transplant, or multiorgan transplant.

Among the 3,590 patients who had LDLT between 2010 and 2021, 342 patients (9.5%) had a MELD of 25 or higher at transplant. There was some progression during the waiting period, Dr. Rosenthal noted, with a median listing MELD score of 19 among those who had a MELD of 25 or higher at transplant and 21 among those who had a MELD of 30 or higher at transplant.

For LDLT recipients with MELD scores above or below 25, researchers found no significant differences in adjusted patient survival or adjusted graft survival.

Then the team compared outcomes of LDLT and DDLT in high-MELD recipients. Among the 67,279-patient DDLT comparator group, 27,552 patients (41%) had a MELD of 25 or higher at transplant.

In terms of LDLT versus DDLT, unadjusted and adjusted patient survival were no different for patients with MELD of 25 or higher. In addition, unadjusted graft survival was no different.

However, adjusted graft survival was worse for LDLT recipients with high MELD scores. In addition, the retransplant rate was higher in LDLT recipients, at 5.7% versus 2.4%.
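The presentation does not say how the adjustment was performed, but an “adjusted” graft survival comparison of this kind is typically made with a Cox proportional hazards model that includes donor type alongside recipient covariates. The sketch below uses synthetic data and hypothetical covariates (recipient age and MELD at transplant) purely to show the structure of such a model; it is not the authors’ analysis.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for OPTN records; values and effect sizes are arbitrary.
rng = np.random.default_rng(0)
n = 500
living_donor = rng.integers(0, 2, n)   # 1 = LDLT, 0 = DDLT
recipient_age = rng.normal(55, 10, n)
meld = rng.normal(30, 4, n)

# Simulated graft survival times and failure/censoring indicators.
hazard = np.exp(0.3 * living_donor + 0.01 * (recipient_age - 55) + 0.02 * (meld - 30))
days = rng.exponential(1000 / hazard)
graft_failed = rng.integers(0, 2, n)   # 1 = graft failure observed, 0 = censored

df = pd.DataFrame({
    "days": days,
    "graft_failed": graft_failed,
    "living_donor": living_donor,
    "recipient_age": recipient_age,
    "meld": meld,
})

# Adjusted comparison: the hazard ratio for living_donor is the donor-type contrast
# after accounting for the other covariates included in the model.
cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="graft_failed")
cph.print_summary()
```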

The reason why graft survival may be worse remains unclear, Dr. Rosenthal said. One hypothesis is that a low graft-to-recipient weight ratio in LDLT can cause small-for-size syndrome. However, these ratios were not available from OPTN.

“Further studies should be done to see what the benefit is, with graft-to-recipient weight ratios included,” he said. “The differences between DDLT and LDLT in this setting should be further explored as well.”

The research team also described temporal and transplant center trends for LDLT by MELD group. For temporal trends, they expanded the study period to 2002-2021.

They found a marked U.S. increase in the percentage of LDLT with a MELD of 25 or higher, particularly in the last decade and especially in the last 5 years. But the percentage of LDLT with high MELD remains lower than 15%, even in recent years, Dr. Rosenthal noted.

Across transplant centers, there was a trend toward centers with increasing LDLT volume having a greater proportion of LDLT recipients with a MELD of 25 or higher. At the 19.6% of centers performing 10 or fewer LDLT during the study period, none of the LDLT recipients had a MELD of 25 or higher, Dr. Rosenthal said.

The authors didn’t report a funding source. The authors declared no relevant disclosures.

Pediatric celiac disease incidence varies across U.S., Europe

Article Type
Changed
Thu, 11/10/2022 - 07:38

The incidence of new celiac disease with onset by age 10 appears to be rising and varies widely by region, suggesting different environmental, genetic, and epigenetic influences within the United States, according to a new report.

The overall high incidence among pediatric patients warrants a low threshold for screening and additional research on region-specific celiac disease triggers, the authors write.

“Determining the true incidence of celiac disease (CD) is not possible without nonbiased screening for the disease. This is because many cases occur with neither a family history nor with classic symptoms,” write Edwin Liu, MD, a pediatric gastroenterologist at the Children’s Hospital Colorado Anschutz Medical Campus and director of the Colorado Center for Celiac Disease, and colleagues.

“Individuals may have celiac disease autoimmunity without having CD if they have transient or fluctuating antibody levels, low antibody levels without biopsy evaluation, dietary modification influencing further evaluation, or potential celiac disease,” they write.

The study was published online in The American Journal of Gastroenterology.
 

Celiac disease incidence

The Environmental Determinants of Diabetes in the Young (TEDDY) study prospectively follows children born between 2004 and 2010 who are at genetic risk for both type 1 diabetes and CD at six clinical sites in four countries: the United States, Finland, Germany, and Sweden. In the United States, patients are enrolled in Colorado, Georgia, and Washington.

As part of TEDDY, children are longitudinally monitored for celiac disease autoimmunity (CDA) by assessment of autoantibodies to tissue transglutaminase (tTGA). The protocol is designed to analyze the development of persistent tTGA positivity, CDA, and subsequent CD. The study population contains various DQ2.5 and DQ8.1 combinations, which represent the highest-risk human leukocyte antigen (HLA) DQ haplogenotypes for CD.

From September 2004 through February 2010, more than 424,000 newborns were screened for specific HLA haplogenotypes, and 8,676 children were enrolled in TEDDY at the six clinical sites. The eligible haplogenotypes included DQ2.5/DQ2.5, DQ2.5/DQ8.1, DQ8.1/DQ8.1, and DQ8.1/DQ4.2.

Blood samples were obtained and stored every 3 months until age 48 months and at least every 6 months after that. Beginning at age 2, participants were screened annually for tTGA. With the first tTGA-positive result, all previously collected samples from the patient were tested for tTGA to determine the earliest time point of autoimmunity.

CDA, a primary study outcome, was defined as positivity in two consecutive tTGA tests at least 3 months apart.

In seropositive children, CD was defined on the basis of a duodenal biopsy with a Marsh score of 2 or higher. The decision to perform a biopsy was determined by the clinical gastroenterologist and was outside of the study protocol. When a biopsy wasn’t performed, participants with an average tTGA of 100 units or greater from two positive tests were considered to have CD for the study purposes.
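To make the outcome definitions concrete, here is a minimal sketch of the two classification rules as described: CDA as two consecutive positive tTGA tests at least 3 months apart, and CD as a biopsy Marsh score of 2 or higher or, when no biopsy was done, an average tTGA of 100 units or greater from two positive tests. The data structure and the handling of test sequences are assumptions for illustration, and the cutoff is left as a parameter because the post hoc sensitivity analysis described later reruns the same definition with a lower value.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TtgaTest:
    age_months: int   # child's age at the blood draw
    value: float      # tTGA level, in units
    positive: bool    # positive according to the assay threshold

def meets_cda(tests: list[TtgaTest]) -> bool:
    """CDA: positivity in two consecutive tTGA tests at least 3 months apart.
    Tests are assumed to be sorted by age_months."""
    for earlier, later in zip(tests, tests[1:]):
        if earlier.positive and later.positive and later.age_months - earlier.age_months >= 3:
            return True
    return False

def meets_cd(tests: list[TtgaTest], marsh_score: Optional[int], cutoff: float = 100.0) -> bool:
    """Study CD definition: duodenal biopsy with Marsh score >= 2, or, without a biopsy,
    an average tTGA at or above the cutoff across two positive tests."""
    if marsh_score is not None:
        return marsh_score >= 2
    positive_values = [t.value for t in tests if t.positive]
    return any((a + b) / 2 >= cutoff for a, b in zip(positive_values, positive_values[1:]))
```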

As of July 2020, among the children who had undergone one or more tTGA tests, 6,628 HLA-typed eligible children were found to carry the DQ2.5, the DQ8.1, or both haplogenotypes and were included in the analysis. The median follow-up period was 11.5 years.

Overall, 580 children (9%) had a first-degree relative with type 1 diabetes, and 317 children (5%) reported a first-degree relative with CD.

Among the 6,628 children, 1,299 (20%) met the CDA outcome, and 529 (8%) met the study diagnostic criteria for CD on the basis of biopsy or persistently high tTGA levels. The median age at CDA across all sites was 41 months. Most children with CDA were asymptomatic.

Overall, the 10-year cumulative incidence was highest in Sweden, at 8.4% for CDA and 3% for CD. Within the United States, Colorado had the highest cumulative incidence for both endpoints, at 6.5% for CDA and 2.4% for CD. Washington had the lowest incidence across all sites, at 4.6% for CDA and 0.9% for CD.

“CDA and CD risk varied substantially by haplogenotype and by clinical center, but the relative risk by region was preserved regardless of the haplogenotype,” the authors write. “For example, the disease burden for each region remained highest in Sweden and lowest in Washington state for all haplogenotypes.”

Site-specific risks

In the HLA-, sex-, and family-adjusted model, Colorado children had a 2.5-fold higher risk of CD, compared with Washington children. Likewise, Swedish children had a 1.8-fold higher risk of CD than children in Germany, a 1.7-fold higher risk than children in the United States, and a 1.4-fold higher risk than children in Finland.

Among DQ2.5 participants, Sweden demonstrated the highest risk, with 63.1% of patients developing CDA by age 10 and 28.3% developing CD by age 10. Finland consistently had a higher incidence of CDA than Colorado, at 60.4% versus 50.9%, for DQ2.5 participants but a lower incidence of CD than Colorado, at 20.3% versus 22.6%.

The research team performed a post hoc sensitivity analysis using a lower tTGA cutoff to reduce bias in site differences for biopsy referral and to increase sensitivity of the CD definition for incidence estimation. When the tTGA cutoff was lowered to an average two-visit tTGA of 67.4 or higher, more children met the serologic criteria for CD.

“Even with this lower cutoff, the differences in the risk of CD between clinical sites and countries were still observed with statistical significance,” the authors write. “This indicates that the regional differences in CD incidence could not be solely attributed to detection biases posed by differential biopsy rates.”

Multiple environmental factors likely account for the differences in autoimmunity among regions, the authors write. These variables include diet, chemical exposures, vaccination patterns, early-life gastrointestinal infections, and interactions among these factors. For instance, the Swedish site has the lowest rotavirus vaccination rates and the highest median gluten intake among the TEDDY sites.

Future prospective studies should capture environmental, genetic, and epigenetic exposures to assess causal pathways and plan for preventive strategies, the authors write. The TEDDY study is pursuing this research.

“From a policy standpoint, this informs future screening practices and supports efforts toward mass screening, at least in some areas,” the authors write. “In the clinical setting, this points to the importance for clinicians to have a low threshold for CD screening in the appropriate clinical setting.”

The TEDDY study is funded by several grants from the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of Allergy and Infectious Diseases, the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institute of Environmental Health Sciences, the Centers for Disease Control and Prevention, and the Juvenile Diabetes Research Foundation. The authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.
