Liver disease-related deaths rise during pandemic
Deaths related to alcohol-associated liver disease (ALD) and nonalcoholic fatty liver disease (NAFLD) rose sharply in the United States during the COVID-19 pandemic, according to new findings presented at the annual meeting of the American Association for the Study of Liver Diseases.
Between 2019 and 2021, ALD-related deaths increased by 17.6% and NAFLD-related deaths increased by 14.5%, Yee Hui Yeo, MD, a resident physician and hepatology-focused investigator at Cedars-Sinai Medical Center in Los Angeles, said at a preconference press briefing.
“Even before the pandemic, the mortality rates for these two diseases have been increasing, with NAFLD having an even steeper increasing trend,” he said. “During the pandemic, these two diseases had a significant surge.”
Recent U.S. liver disease death rates
Dr. Yeo and colleagues analyzed data from the Centers for Disease Control and Prevention’s National Vital Statistics System to estimate the age-standardized mortality rates (ASMR) of liver disease between 2010 and 2021, including ALD, NAFLD, hepatitis B, and hepatitis C. Using prediction modeling analyses based on trends from 2010 to 2019, they predicted mortality rates for 2020-2021 and compared them with the observed rates to quantify the differences related to the pandemic.
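The report does not specify the prediction model; as a minimal sketch of the predicted-versus-observed approach, assuming a log-linear trend (a constant annual percentage change, a common choice for mortality rates) fitted to the 2010-2019 rates:

```python
import math

def project_rate(years, rates, target_year):
    """Fit a log-linear trend (constant annual % change) to observed
    rates by least squares and project the rate for a later year.
    Illustrative only; the study's actual model is not described here."""
    xs, ys = years, [math.log(r) for r in rates]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return math.exp(intercept + slope * target_year)

# Hypothetical ALD ASMR series growing 3.5% per year from 9.2 per
# 100,000 in 2010 (values chosen to land near the reported 13.0
# predicted for 2020; the study's underlying data are not shown here)
years = list(range(2010, 2020))
rates = [9.2 * 1.035 ** (y - 2010) for y in years]
predicted_2020 = project_rate(years, rates, 2020)
excess_2020 = 15.7 - predicted_2020  # observed 2020 ASMR was 15.7
```

The same comparison against the projection for 2021 would quantify that year's pandemic-associated excess.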
Between 2010 and 2021, there were about 626,000 chronic liver disease–related deaths, including about 343,000 ALD-related deaths, 204,000 hepatitis C–related deaths, 58,000 NAFLD-related deaths, and 21,000 hepatitis B–related deaths.
For ALD-related deaths, the annual percentage change was 3.5% for 2010-2019 and 17.6% for 2019-2021. The observed ASMR in 2020 was significantly higher than predicted, at 15.7 deaths per 100,000 people versus the 13.0 projected from the 2010-2019 trend. The gap widened in 2021, with 17.4 observed deaths per 100,000 people versus 13.4 predicted.
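The annual percentage change figures quoted here come from trend regression on the full series (joinpoint-style methods are typical), but the quantity itself is a compound growth rate and can be illustrated directly from two rates:

```python
def annual_percentage_change(rate_start, rate_end, n_years):
    """Compound annual percentage change between two rates; a sketch of
    the quantity reported, not the study's regression method."""
    return ((rate_end / rate_start) ** (1 / n_years) - 1) * 100

# The observed ALD ASMR rose from 15.7 (2020) to 17.4 (2021) per
# 100,000 people, a single-year change of roughly 11%:
one_year_apc = annual_percentage_change(15.7, 17.4, 1)
```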
The highest numbers of ALD-related deaths during the COVID-19 pandemic occurred in Alaska, Montana, Wyoming, Colorado, New Mexico, and South Dakota.
For NAFLD-related deaths, the annual percentage change was 7.6% for 2010-2014, 11.8% for 2014-2019, and 14.5% for 2019-2021. The observed ASMR was also higher than predicted, at 3.1 deaths per 100,000 people versus 2.6 in 2020, as well as 3.4 versus 2.8 in 2021.
The highest numbers of NAFLD-related deaths during the COVID-19 pandemic occurred in Oklahoma, Indiana, Kentucky, Tennessee, and West Virginia.
Hepatitis B and C gains lost in pandemic
In contrast, the annual percentage change was –1.9% for hepatitis B and –2.8% for hepatitis C. After new hepatitis C treatments emerged in 2013-2014, the annual percentage change in hepatitis C mortality was –7.8% for 2014-2019, Dr. Yeo noted.
“However, during the pandemic, we saw that this decrease has become a nonsignificant change,” he said. “That means our progress of the past 5 or 6 years has already stopped during the pandemic.”
By race and ethnicity, the increase in ALD-related mortality was most pronounced in non-Hispanic White, non-Hispanic Black, and Alaska Native/American Indian populations, Dr. Yeo said. Alaska Natives and American Indians had the highest annual percentage change, at 18%, followed by non-Hispanic Whites at 11.7% and non-Hispanic Blacks at 10.8%. There were no significant differences in race and ethnicity for NAFLD-related deaths, although all groups had major increases in recent years.
Biggest rise in young adults
By age, the increase in ALD-related mortality was particularly severe for ages 25-44, with an annual percentage change of 34.6% in 2019-2021, as compared with 13.7% for ages 45-64 and 12.6% for ages 65 and older.
For NAFLD-related deaths, another major increase was observed among ages 25-44, with an annual percentage change of 28.1% for 2019-2021, as compared with 12% for ages 65 and older and 7.4% for ages 45-64.
By sex, the ASMR increase in NAFLD-related mortality was steady throughout 2010-2021 for both men and women. In contrast, ALD-related deaths increased sharply between 2019 and 2021, with an annual percentage change of 19.1% for women and 16.7% for men.
“The increasing trend in mortality rates for ALD and NAFLD has been quite alarming, with disparities in age, race, and ethnicity,” Dr. Yeo said.
The study received no funding support. Some authors disclosed research funding, advisory board roles, and consulting fees with various pharmaceutical companies.
FROM THE LIVER MEETING
Living donor liver transplants on rise for most urgent need
Living donor liver transplants (LDLT) for recipients with the most urgent need for a liver transplant in the next 3 months – a model for end-stage liver disease (MELD) score of 25 or higher – have become more frequent during the past decade, according to new findings presented at the annual meeting of the American Association for the Study of Liver Diseases.
Among LDLT recipients, researchers found comparable patient and graft survival at low and high MELD scores. But among patients with high MELD scores, researchers found lower adjusted graft survival and a higher retransplant rate among those with living donors, compared with recipients of deceased donor liver transplantation (DDLT).
The findings suggest certain advantages of LDLT over DDLT may be lost in the high-MELD setting in terms of graft survival, said Benjamin Rosenthal, MD, an internal medicine resident focused on transplant hepatology at the Hospital of the University of Pennsylvania, Philadelphia.
“Historically, in the United States especially, living donor liver transplantation has been offered to patients with low or moderate MELD,” he said. “The outcomes of LDLT at high MELD are currently unknown.”
Previous data from the Adult-to-Adult Living Donor Liver Transplantation Cohort Study (A2ALL) found that LDLT offered a survival benefit versus remaining on the wait list, independent of MELD score, he said. A recent study has also demonstrated a survival benefit across MELD scores of 11-26, but findings for MELD scores of 25 and higher have been mixed.
Trends and outcomes in LDLT at high MELD scores
Dr. Rosenthal and colleagues conducted a retrospective cohort study of adult LDLT recipients from 2010 to 2021 using data from the Organ Procurement and Transplantation Network (OPTN), the U.S. donation and transplantation system.
In baseline characteristics among LDLT recipients, there weren’t significant differences in age, sex, race, or ethnicity between those with MELD scores below 25 and those at 25 or higher. There also weren’t significant differences in donor age, relationship, use of nondirected grafts, or percentage of right and left lobe donors. However, recipients with high MELD scores had more nonalcoholic steatohepatitis (29.5% versus 24.6%) and alcohol-associated cirrhosis (21.6% versus 14.3%).
The research team evaluated graft survival among LDLT recipients by MELD below 25 and at 25 or higher. They also compared posttransplant patient and graft survival between LDLT and DDLT recipients with a MELD of 25 or higher. They excluded transplant candidates on the wait list for Status 1/1A, redo transplant, or multiorgan transplant.
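Graft and patient survival comparisons of this kind typically rest on Kaplan-Meier estimates, with Cox models for the adjusted analyses. A minimal Kaplan-Meier sketch on hypothetical follow-up data, not the study's own analysis:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: follow-up time per graft; events: 1 = graft failure,
    0 = censored. Returns [(time, survival probability)] at each
    failure time. Hypothetical illustration only."""
    data = sorted(zip(times, events))
    n_at_risk, surv, curve, i = len(data), 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        ties = [e for tt, e in data if tt == t]  # all grafts at time t
        failures = sum(ties)
        if failures:
            surv *= 1 - failures / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(ties)
        i += len(ties)
    return curve

# Hypothetical follow-up (years) for six grafts
times = [1, 2, 2, 3, 5, 5]
events = [1, 0, 1, 1, 0, 0]  # three failures, three censored
curve = kaplan_meier(times, events)
```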
Among the 3,590 patients who had LDLT between 2010 and 2021, 342 patients (9.5%) had a MELD of 25 or higher at transplant. There was some progression during the waiting period, Dr. Rosenthal noted, with a median listing MELD score of 19 among those who had a MELD of 25 or higher at transplant and 21 among those who had a MELD of 30 or higher at transplant.
For LDLT recipients with MELD scores above or below 25, researchers found no significant differences in adjusted patient survival or adjusted graft survival.
Then the team compared outcomes of LDLT and DDLT in high-MELD recipients. Among the 67,279-patient DDLT comparator group, 27,552 patients (41%) had a MELD of 25 or higher at transplant.
In terms of LDLT versus DDLT, unadjusted and adjusted patient survival were no different for patients with MELD of 25 or higher. In addition, unadjusted graft survival was no different.
However, adjusted graft survival was worse for LDLT recipients with high MELD scores. In addition, the retransplant rate was higher in LDLT recipients, at 5.7% versus 2.4%.
The reason why graft survival may be worse remains unclear, Dr. Rosenthal said. One hypothesis is that a low graft-to-recipient weight ratio in LDLT can cause small-for-size syndrome. However, these ratios were not available from OPTN.
“Further studies should be done to see what the benefit is, with graft-to-recipient weight ratios included,” he said. “The differences between DDLT and LDLT in this setting should be further explored as well.”
The research team also described temporal and transplant center trends for LDLT by MELD group. For the temporal trends, they expanded the study period to 2002-2021.
They found a marked U.S. increase in the percentage of LDLTs with a MELD of 25 or higher, particularly in the last decade and especially in the last 5 years. But the percentage of LDLTs with high MELD remains below 15%, even in recent years, Dr. Rosenthal noted.
Across transplant centers, there was a trend toward centers with increasing LDLT volume having a greater proportion of LDLT recipients with a MELD of 25 or higher. At the 19.6% of centers performing 10 or fewer LDLT during the study period, none of the LDLT recipients had a MELD of 25 or higher, Dr. Rosenthal said.
The authors didn’t report a funding source. The authors declared no relevant disclosures.
Pediatric celiac disease incidence varies across U.S., Europe
The incidence of celiac disease in children varies substantially across regions of the United States and Europe, according to a new report.
The overall high incidence among pediatric patients warrants a low threshold for screening and additional research on region-specific celiac disease triggers, the authors write.
“Determining the true incidence of celiac disease (CD) is not possible without nonbiased screening for the disease. This is because many cases occur with neither a family history nor with classic symptoms,” write Edwin Liu, MD, a pediatric gastroenterologist at the Children’s Hospital Colorado Anschutz Medical Campus and director of the Colorado Center for Celiac Disease, and colleagues.
“Individuals may have celiac disease autoimmunity without having CD if they have transient or fluctuating antibody levels, low antibody levels without biopsy evaluation, dietary modification influencing further evaluation, or potential celiac disease,” they write.
The study was published online in The American Journal of Gastroenterology.
Celiac disease incidence
The Environmental Determinants of Diabetes in the Young (TEDDY) study prospectively follows children born between 2004 and 2010 who are at genetic risk for both type 1 diabetes and CD at six clinical sites in four countries: the United States, Finland, Germany, and Sweden. In the United States, patients are enrolled in Colorado, Georgia, and Washington.
As part of TEDDY, children are longitudinally monitored for celiac disease autoimmunity (CDA) by assessment of autoantibodies to tissue transglutaminase (tTGA). The protocol is designed to analyze the development of persistent tTGA positivity, CDA, and subsequent CD. The study population contains various DQ2.5 and DQ8.1 combinations, which represent the highest-risk human leukocyte antigen (HLA) DQ haplogenotypes for CD.
From September 2004 through February 2010, more than 424,000 newborns were screened for specific HLA haplogenotypes, and 8,676 children were enrolled in TEDDY at the six clinical sites. The eligible haplogenotypes included DQ2.5/DQ2.5, DQ2.5/DQ8.1, DQ8.1/DQ8.1, and DQ8.1/DQ4.2.
Blood samples were obtained and stored every 3 months until age 48 months and at least every 6 months thereafter. Beginning at age 2, participants were screened annually for tTGA. With the first tTGA-positive result, all previously collected samples from that patient were tested for tTGA to determine the earliest time point of autoimmunity.
CDA, a primary study outcome, was defined as positivity in two consecutive tTGA tests at least 3 months apart.
In seropositive children, CD was defined on the basis of a duodenal biopsy with a Marsh score of 2 or higher. The decision to perform a biopsy was determined by the clinical gastroenterologist and was outside of the study protocol. When a biopsy wasn’t performed, participants with an average tTGA of 100 units or greater from two positive tests were considered to have CD for the study purposes.
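The two study definitions above reduce to simple rules over a child's longitudinal test results; a sketch with hypothetical field names and sample values:

```python
def has_cda(tests):
    """CDA per the study definition: positivity in two consecutive tTGA
    tests at least 3 months apart. tests is a chronological list of
    (age_in_months, is_positive, ttga_value); the structure is
    illustrative, not the study's actual data format."""
    for (a1, p1, _), (a2, p2, _) in zip(tests, tests[1:]):
        if p1 and p2 and a2 - a1 >= 3:
            return True
    return False

def meets_serologic_cd(tests):
    """Without a biopsy, CD was assigned when the average tTGA across
    two positive tests was 100 units or greater."""
    positives = [v for _, p, v in tests if p]
    return any((v1 + v2) / 2 >= 100
               for v1, v2 in zip(positives, positives[1:]))

# Hypothetical child: positive at 36 and 41 months, averaging >= 100
child = [(24, False, 3.0), (36, True, 80.0), (41, True, 130.0)]
```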
As of July 2020, among the children who had undergone one or more tTGA tests, 6,628 HLA-typed eligible children were found to carry the DQ2.5 haplogenotype, the DQ8.1 haplogenotype, or both and were included in the analysis. The median follow-up period was 11.5 years.
Overall, 580 children (9%) had a first-degree relative with type 1 diabetes, and 317 children (5%) reported a first-degree relative with CD.
Among the 6,628 children, 1,299 (20%) met the CDA outcome, and 529 (8%) met the study diagnostic criteria for CD on the basis of biopsy or persistently high tTGA levels. The median age at CDA across all sites was 41 months. Most children with CDA were asymptomatic.
Overall, the 10-year cumulative incidence was highest in Sweden, at 8.4% for CDA and 3% for CD. Within the United States, Colorado had the highest cumulative incidence for both endpoints, at 6.5% for CDA and 2.4% for CD. Washington had the lowest incidence across all sites, at 4.6% for CDA and 0.9% for CD.
“CDA and CD risk varied substantially by haplogenotype and by clinical center, but the relative risk by region was preserved regardless of the haplogenotype,” the authors write. “For example, the disease burden for each region remained highest in Sweden and lowest in Washington state for all haplogenotypes.”
Site-specific risks
In the model adjusted for HLA, sex, and family history, Colorado children had a 2.5-fold higher risk of CD, compared with Washington children. Likewise, Swedish children had a 1.8-fold higher risk of CD than children in Germany, a 1.7-fold higher risk than children in the United States, and a 1.4-fold higher risk than children in Finland.
Among DQ2.5 participants, Sweden demonstrated the highest risk, with 63.1% of patients developing CDA by age 10 and 28.3% developing CD by age 10. Finland consistently had a higher incidence of CDA than Colorado, at 60.4% versus 50.9%, for DQ2.5 participants but a lower incidence of CD than Colorado, at 20.3% versus 22.6%.
The research team performed a post hoc sensitivity analysis using a lower tTGA cutoff to reduce bias in site differences for biopsy referral and to increase sensitivity of the CD definition for incidence estimation. When the tTGA cutoff was lowered to an average two-visit tTGA of 67.4 or higher, more children met the serologic criteria for CD.
“Even with this lower cutoff, the differences in the risk of CD between clinical sites and countries were still observed with statistical significance,” the authors write. “This indicates that the regional differences in CD incidence could not be solely attributed to detection biases posed by differential biopsy rates.”
Multiple environmental factors likely account for the differences in autoimmunity among regions, the authors write. These variables include diet, chemical exposures, vaccination patterns, early-life gastrointestinal infections, and interactions among these factors. For instance, the Swedish site has the lowest rotavirus vaccination rates and the highest median gluten intake among the TEDDY sites.
Future prospective studies should capture environmental, genetic, and epigenetic exposures to assess causal pathways and plan for preventive strategies, the authors write. The TEDDY study is pursuing this research.
“From a policy standpoint, this informs future screening practices and supports efforts toward mass screening, at least in some areas,” the authors write. “In the clinical setting, this points to the importance for clinicians to have a low threshold for CD screening in the appropriate clinical setting.”
The TEDDY study is funded by several grants from the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of Allergy and Infectious Diseases, the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institute of Environmental Health Sciences, the Centers for Disease Control and Prevention, and the Juvenile Diabetes Research Foundation. The authors have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
The incidence of celiac disease and celiac disease autoimmunity among genetically at-risk children varies substantially by region, according to a new report.
The overall high incidence among pediatric patients warrants a low threshold for screening and additional research on region-specific celiac disease triggers, the authors write.
“Determining the true incidence of celiac disease (CD) is not possible without nonbiased screening for the disease. This is because many cases occur with neither a family history nor with classic symptoms,” write Edwin Liu, MD, a pediatric gastroenterologist at the Children’s Hospital Colorado Anschutz Medical Campus and director of the Colorado Center for Celiac Disease, and colleagues.
“Individuals may have celiac disease autoimmunity without having CD if they have transient or fluctuating antibody levels, low antibody levels without biopsy evaluation, dietary modification influencing further evaluation, or potential celiac disease,” they write.
The study was published online in The American Journal of Gastroenterology.
Celiac disease incidence
The Environmental Determinants of Diabetes in the Young (TEDDY) study prospectively follows children born between 2004 and 2010 who are at genetic risk for both type 1 diabetes and CD at six clinical sites in four countries: the United States, Finland, Germany, and Sweden. In the United States, patients are enrolled in Colorado, Georgia, and Washington.
As part of TEDDY, children are longitudinally monitored for celiac disease autoimmunity (CDA) by assessment of autoantibodies to tissue transglutaminase (tTGA). The protocol is designed to analyze the development of persistent tTGA positivity, CDA, and subsequent CD. The study population contains various DQ2.5 and DQ8.1 combinations, which represent the highest-risk human leukocyte antigen (HLA) DQ haplogenotypes for CD.
From September 2004 through February 2010, more than 424,000 newborns were screened for specific HLA haplogenotypes, and 8,676 children were enrolled in TEDDY at the six clinical sites. The eligible haplogenotypes included DQ2.5/DQ2.5, DQ2.5/DQ8.1, DQ8.1/DQ8.1, and DQ8.1/DQ4.2.
Blood samples were obtained and stored every 3 months until age 48 months and at least every 6 months after that. Beginning at age 2, participants were screened annually for tTGA. With the first tTGA-positive result, all previously collected samples from the patient were tested for tTGA to determine the earliest time point of autoimmunity.
CDA, a primary study outcome, was defined as positivity in two consecutive tTGA tests at least 3 months apart.
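These serologic definitions can be expressed as a small check over a child's test history. The sketch below is hypothetical and not study code; the visit structure and function names are my assumptions, and only the thresholds come from the text (two consecutive positive tests at least 3 months apart for CDA; an average tTGA of 100 units or greater from two positive tests for CD without biopsy).

```python
from datetime import date

def meets_cda(visits):
    """CDA: positive results on two consecutive tTGA tests taken at
    least 3 months (~90 days) apart.
    visits: chronological list of (test_date, ttga_value, is_positive)."""
    for (d1, _, pos1), (d2, _, pos2) in zip(visits, visits[1:]):
        if pos1 and pos2 and (d2 - d1).days >= 90:
            return True
    return False

def meets_serologic_cd(visits):
    """Study CD criterion when no biopsy was done: average tTGA of 100
    units or greater from two positive tests (here simplified to
    consecutive positive tests)."""
    for (d1, v1, pos1), (d2, v2, pos2) in zip(visits, visits[1:]):
        if pos1 and pos2 and (v1 + v2) / 2 >= 100:
            return True
    return False
```

A child with positive values of 120 and 110 units four months apart would meet both criteria; the same two results one month apart would meet neither the CDA spacing requirement nor, under this simplification, the serologic CD criterion.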
In seropositive children, CD was defined on the basis of a duodenal biopsy with a Marsh score of 2 or higher. The decision to perform a biopsy was determined by the clinical gastroenterologist and was outside of the study protocol. When a biopsy wasn’t performed, participants with an average tTGA of 100 units or greater from two positive tests were considered to have CD for the study purposes.
As of July 2020, among the children who had undergone one or more tTGA tests, 6,628 HLA-typed eligible children were found to carry the DQ2.5 haplogenotype, the DQ8.1 haplogenotype, or both and were included in the analysis. The median follow-up period was 11.5 years.
Overall, 580 children (9%) had a first-degree relative with type 1 diabetes, and 317 children (5%) reported a first-degree relative with CD.
Among the 6,628 children, 1,299 (20%) met the CDA outcome, and 529 (8%) met the study diagnostic criteria for CD on the basis of biopsy or persistently high tTGA levels. The median age at CDA across all sites was 41 months. Most children with CDA were asymptomatic.
Overall, the 10-year cumulative incidence was highest in Sweden, at 8.4% for CDA and 3.0% for CD. Within the United States, Colorado had the highest cumulative incidence for both endpoints, at 6.5% for CDA and 2.4% for CD. Washington had the lowest incidence across all sites, at 4.6% for CDA and 0.9% for CD.
“CDA and CD risk varied substantially by haplogenotype and by clinical center, but the relative risk by region was preserved regardless of the haplogenotype,” the authors write. “For example, the disease burden for each region remained highest in Sweden and lowest in Washington state for all haplogenotypes.”
Site-specific risks
In the HLA, sex, and family-adjusted model, Colorado children had a 2.5-fold higher risk of CD, compared with Washington children. Likewise, Swedish children had a 1.8-fold higher risk of CD than children in Germany, a 1.7-fold higher risk than children in the United States, and a 1.4-fold higher risk than children in Finland.
Among DQ2.5 participants, Sweden demonstrated the highest risk, with 63.1% of patients developing CDA and 28.3% developing CD by age 10. Among DQ2.5 participants, Finland consistently had a higher incidence of CDA than Colorado, at 60.4% versus 50.9%, but a lower incidence of CD, at 20.3% versus 22.6%.
The research team performed a post hoc sensitivity analysis using a lower tTGA cutoff to reduce bias in site differences for biopsy referral and to increase sensitivity of the CD definition for incidence estimation. When the tTGA cutoff was lowered to an average two-visit tTGA of 67.4 or higher, more children met the serologic criteria for CD.
“Even with this lower cutoff, the differences in the risk of CD between clinical sites and countries were still observed with statistical significance,” the authors write. “This indicates that the regional differences in CD incidence could not be solely attributed to detection biases posed by differential biopsy rates.”
Multiple environmental factors likely account for the differences in autoimmunity among regions, the authors write. These variables include diet, chemical exposures, vaccination patterns, early-life gastrointestinal infections, and interactions among these factors. For instance, the Swedish site has the lowest rotavirus vaccination rates and the highest median gluten intake among the TEDDY sites.
Future prospective studies should capture environmental, genetic, and epigenetic exposures to assess causal pathways and plan for preventive strategies, the authors write. The TEDDY study is pursuing this research.
“From a policy standpoint, this informs future screening practices and supports efforts toward mass screening, at least in some areas,” the authors write. “In the clinical setting, this points to the importance for clinicians to have a low threshold for CD screening in the appropriate clinical setting.”
The TEDDY study is funded by several grants from the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of Allergy and Infectious Diseases, the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institute of Environmental Health Sciences, the Centers for Disease Control and Prevention, and the Juvenile Diabetes Research Foundation. The authors have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM AMERICAN JOURNAL OF GASTROENTEROLOGY
NAFLD patients with diabetes have higher fibrosis progression rate
Among people with nonalcoholic fatty liver disease (NAFLD), the fibrosis progression rate was higher among those who also had diabetes, according to new findings presented at the annual meeting of the American Association for the Study of Liver Diseases.
NAFLD patients with type 2 diabetes progressed by one stage about every 6 years, compared with one stage about every 8 years among patients without diabetes, said Daniel Huang, MBBS, a visiting scholar at the University of California San Diego (UCSD) NAFLD Research Center and a transplant hepatologist at National University Hospital in Singapore.
“We now know that fibrosis stage is a major determinant of liver-related outcomes in NAFLD, as well as overall mortality,” he said. “Liver fibrosis progresses by approximately one stage every 7 years for individuals with NASH (nonalcoholic steatohepatitis).”
Recent UCSD data have indicated that about 14% of patients over age 50 with type 2 diabetes have NAFLD with advanced fibrosis, he noted. Previous studies have shown that diabetes is associated with higher rates of advanced fibrosis, cirrhosis, and hepatocellular carcinoma, but limited data exist on whether the fibrosis progression rate is higher among patients with diabetes.
National study cohort
Dr. Huang and colleagues conducted a multicenter, multiethnic prospective cohort study within the NASH Clinical Research Network consortium to examine the fibrosis progression rate and the fibrosis regression rate among patients with or without diabetes. The study included adult participants at eight sites across the United States who had biopsy-confirmed NAFLD and available paired liver biopsies that were at least 1 year apart.
Clinical and laboratory data were obtained at enrollment and prospectively at 48-week intervals and recorded at the time of any liver biopsies. A central pathology committee conducted the liver histology assessment, and the entire pathology committee was blinded to clinical data and the sequence of liver biopsy. The fibrosis progression and regression rates were defined as the change in fibrosis stage over time between biopsies, measured in years.
The study comprised 447 adult participants with NAFLD: 208 patients with type 2 diabetes and 239 patients without diabetes, Dr. Huang said. The mean age was 51, and the mean body mass index was 34.7. The patients with diabetes were more likely to be older, to be women, and to have metabolic syndrome, NASH, and a higher fibrosis stage.
Notably, the median HbA1c among patients with diabetes was 6.8%, indicating a cohort with fairly well-controlled blood sugar. The median time between biopsies was 3.3 years.
Difference in progression, not regression
Overall, 151 participants (34%) experienced fibrosis progression, the primary study outcome. In a secondary outcome, 102 participants (23%) had fibrosis regression. The remaining 194 participants (43%) had no change in fibrosis stage. About 26% of patients with type 2 diabetes progressed to advanced fibrosis, compared with 14.1% of patients without diabetes.
Among all those with fibrosis progression, the rate was 0.15 stages per year, with an average progression rate of one stage over 6.7 years. For patients with diabetes, the progression rate was significantly higher at 0.17 stages per year, compared with 0.13 stages per year among patients without diabetes, Dr. Huang said. That translated to an average progression of one stage over 5.9 years for patients with diabetes and 7.7 years for patients without diabetes.
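The conversion between the two ways of expressing the rate is simply a reciprocal: inverting stages per year gives years per stage. A quick sketch of the arithmetic, using the figures reported above (the function name is mine):

```python
def years_per_stage(stages_per_year):
    # years needed to progress one fibrosis stage is the
    # reciprocal of the annual progression rate
    return 1.0 / stages_per_year

print(round(years_per_stage(0.17), 1))  # with diabetes: 5.9 years per stage
print(round(years_per_stage(0.13), 1))  # without diabetes: 7.7 years per stage
print(round(years_per_stage(0.15), 1))  # all progressors: 6.7 years per stage
```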
In contrast, the regression rate was similar between those with or without diabetes at baseline, at –0.13 stages per year for those with diabetes versus –0.14 stages per year for those without diabetes. The similar outcome translated to an average regression of one stage over 7.7 years among those with diabetes and 7.1 years among those without diabetes.
Type 2 diabetes was an independent predictor of fibrosis progression in NAFLD, in both unadjusted and multivariable adjusted models, including baseline fibrosis stage, Dr. Huang said. In addition, patients with diabetes had a significantly higher cumulative incidence of fibrosis progression at 4 years (23% versus 19%), 8 years (59% versus 49%), and 12 years (93% versus 76%).
The research team did not find HbA1c to be a significant predictor of fibrosis progression when using a cutoff of 7%.
“It is possible that poor glycemic control may accelerate fibrosis further, but we need studies to validate this,” Dr. Huang said. “These data have important implications for clinical practice and clinical trial design. Patients with NAFLD and diabetes may require more frequent monitoring for disease progression.”
The NASH Clinical Research Network consortium is sponsored by the National Institute of Diabetes and Digestive and Kidney Diseases. Dr. Huang has served on an advisory board for Eisai. The other authors declared various research support and advisory roles with numerous pharmaceutical companies.
FROM THE LIVER MEETING
Noninvasive tests may provide prognostic value in NAFLD
Fibrosis stages and liver stiffness measured by vibration-controlled transient elastography (LSM-VCTE) through FibroScan were significant predictors of event-free survival, said Ferenc Mozes, DPhil, a postdoctoral research assistant at the University of Oxford, England, who has worked on biomarker evaluation of nonalcoholic steatohepatitis (NASH) as a member of the Liver Investigation: Testing Marker Utility in Steatohepatitis (LITMUS) consortium.
“Liver histology is highly prognostic of liver-related outcomes in patients with NAFLD and NASH,” he said. “Not just that, but liver histology is also accepted, and furthermore mandated by the FDA, as a surrogate endpoint in pharmaceutical trials for NASH.”
However, liver histology is limited by sampling- and observer-dependent errors, he noted, and biopsy carries a nonzero risk for patients. In recent years, researchers have hypothesized that noninvasive surrogate endpoints could speed the development of new pharmaceutical treatments.
Dr. Mozes and colleagues evaluated the prognostic performance of histologically assessed liver fibrosis and three noninvasive tests (NITs): LSM-VCTE, Fibrosis-4 index (FIB-4), and NAFLD fibrosis score (NFS). They conducted an individual participant data meta-analysis, which first established the diagnostic performance of NITs in identifying patients with NAFLD who had advanced fibrosis (stages F3 and F4). The research team then expanded the search by reaching out to authors to ask for outcomes data and including studies with baseline LSM-VCTE and liver histology performed within 6 months, as well as at least 1 year of follow-up data.
The composite endpoint included all-cause mortality or liver-related outcomes such as decompensation of cirrhosis, hepatocellular cancer, liver transplantation, a model of end-stage liver disease (MELD) score higher than 14, or histological progression to cirrhosis. Participants were censored at the last follow-up time or at the occurrence of the first liver-related event.
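The censoring rule described above can be sketched as a small helper that converts each participant's record into the (time, event-observed) pair a survival analysis consumes. The function name and data layout here are illustrative, not taken from the study:

```python
from typing import Optional, Tuple

def observation(first_event_month: Optional[float],
                last_followup_month: float) -> Tuple[float, bool]:
    """Return (time, event_observed) for one participant.

    A participant contributes follow-up time until the first
    liver-related event; otherwise they are censored at last follow-up.
    """
    if first_event_month is not None and first_event_month <= last_followup_month:
        return first_event_month, True   # event observed
    return last_followup_month, False    # censored at last follow-up

# One participant with an event at month 30, one censored at month 64:
print(observation(30, 64))    # (30, True)
print(observation(None, 64))  # (64, False)
```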
Based on Kaplan-Meier survival analysis, participants were stratified into groups based on thresholds derived from the literature: fibrosis stage 0-2 (F0-2), F3, and F4; LSM less than 10 kPa, 10 kPa to less than 20 kPa, and 20 kPa or more; FIB-4 less than 1.3, 1.3 to less than 2.67, and 2.67 or more; and NFS less than –1.455, –1.455 to less than 0.676, and 0.676 or more.
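The three-way strata above can be expressed directly in code using the quoted cutoffs. The FIB-4 formula shown is the standard published definition (age × AST / [platelets × √ALT]), which the abstract itself does not spell out; the example values are invented for illustration:

```python
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_1e9_l: float) -> float:
    """Fibrosis-4 index: age * AST / (platelets * sqrt(ALT))."""
    return age_years * ast_u_l / (platelets_1e9_l * math.sqrt(alt_u_l))

def stratum(value: float, low_cutoff: float, high_cutoff: float) -> str:
    """Assign the low/intermediate/high stratum given two cutoffs."""
    if value < low_cutoff:
        return "low"
    if value < high_cutoff:
        return "intermediate"
    return "high"

# Cutoffs quoted in the study:
#   LSM-VCTE: 10 and 20 kPa; FIB-4: 1.3 and 2.67; NFS: -1.455 and 0.676
print(stratum(12.0, 10, 20))                      # LSM 12 kPa -> intermediate
print(stratum(fib4(61, 40, 35, 210), 1.3, 2.67))  # hypothetical patient
```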
The research team included 13 studies from Europe and Asia with data on 1,796 patients. The median follow-up time was 64 months from both biopsy and LSM-VCTE. The fibrosis stages were typical of what would be seen in tertiary care.
Overall, 125 patients (7%) reached the composite endpoint. They tended to be older and more likely to have type 2 diabetes, higher fibrosis stages, and cirrhosis. Among those, 80 participants died, including 25 from liver-related causes. In addition, 23 had ascites, 28 had hepatocellular cancer, and 31 progressed to cirrhosis or a MELD score greater than 14.
On the Kaplan-Meier curves, both the histology and noninvasive tests showed significant differences among the three strata for event-free survival probability.
Based on univariable Cox proportional hazard modeling, fibrosis stages F3 and F4 and continuous LSM-VCTE were significantly predictive of event-free survival probability. In multivariable models, fibrosis stage 4 and the two higher strata of LSM-VCTE were significantly predictive.
The study had several limitations, Dr. Mozes noted, including its reliance on cohort studies that weren’t originally designed to evaluate prognostic performance. The researchers also couldn’t account for treatment effects and had no central histology reading. In addition, there may have been geographical variation in practice, as well as changes in practice over time as FibroScan technology improved in recent years.
“It turns out that stratifying patients by NIT score ranges can predict event-free survival probability,” he said. “This could pave the way into considering noninvasive tests as surrogate endpoints in clinical trials.”
In the ongoing study, Dr. Mozes and colleagues plan to look at additional aspects, such as MELD differentiation, histologic progression, and whether the NIT cutoffs differ from the current factors used to define advanced fibrosis. Future research should include longitudinal data and prospective studies, he added.
The study was sponsored by the LITMUS consortium, which has received funding from the Innovative Medicines Initiative 2 Joint Undertaking and the European Union’s Horizon 2020 research and innovation program. Dr. Mozes disclosed no relevant financial relationships.
FROM THE LIVER MEETING
Low-carb diet aids weight loss in liver transplant recipients with obesity
A low-carbohydrate diet appears to be an effective weight-loss intervention in liver transplant recipients with obesity as compared with a calorie-restrictive diet, according to interim findings presented at the annual meeting of the American Association for the Study of Liver Diseases.
In particular, the intervention showed significant improvements in the metabophenotype profile, including visceral adipose tissue and abdominal subcutaneous adipose tissue, said Mohammad Siddiqui, MD, a gastroenterologist and liver transplant specialist at Virginia Commonwealth University, Richmond.
“Weight gain and obesity after liver transplantation is common,” he said. “Posttransplant obesity is associated with increased cardiometabolic risk burden, increased risk of cardiovascular disease and mortality, and overall mortality.”
Previously, Dr. Siddiqui and colleagues have shown that posttransplant weight loss is difficult because of metabolic inflexibility and mitochondrial inefficiency. By specifically targeting carbohydrate utilization, metabolic flexibility could be restored in liver transplant recipients, he noted.
Dr. Siddiqui and colleagues conducted a randomized controlled trial of 27 adult liver transplant recipients with obesity for 24 weeks. The primary endpoint was change in weight, and the secondary endpoints involved metabophenotype, metabolic flexibility, mitochondrial function, and metabolic risk. The research team excluded patients with end-stage disease, terminal disease, use of weight-loss medications, pregnancy, or uncontrolled psychiatric illness that could interfere with adherence.
Among the participants, 13 were randomized to a calorie-restrictive diet of less than 1,200-1,500 calories per day, and 14 were randomized to a low-carbohydrate diet of 20 grams or less of carbohydrates per day. At enrollment, the participants underwent dietary, activity, skeletal muscle, and body composition assessments, as well as metabophenotype measurements of visceral adipose tissue, abdominal subcutaneous adipose tissue, muscle fat infiltration, fat-free muscle volume, and proton density fat fraction.
All participants were advised to maintain the same level of physical activity, which was measured through 7-day accelerometry. In addition, the patients were contacted every 2 weeks throughout the 24-week study period.
“We wanted to reinforce the dietary advice. We wanted to identify factors that may lead to compliance,” Dr. Siddiqui said. “Multiple studies have documented that the more contact that patients have during weight-loss studies with medical personnel, the more effective those strategies are.”
Overall, the dietary interventions were well tolerated, and neither group showed a significant change in renal function.
The average weight change over 6 months was –7.6 kg in the low-carbohydrate group, as compared with –0.6 kg in the calorie-restrictive group.
The low-carbohydrate diet also positively affected participants’ metabophenotype profile, particularly fat deposits. As compared with the calorie-restrictive group, the low-carbohydrate group showed statistically significant improvements in visceral adipose tissue, abdominal subcutaneous adipose tissue, and muscle fat infiltration.
The liver proton density fat fraction, which is associated with fatty liver disease, decreased by 0.53% in the low-carbohydrate group and increased by 0.46% in the calorie-restrictive group, but the difference didn’t reach statistical significance.
The fat-free muscle volume decreased by about 5% in the low-carbohydrate group. Dr. Siddiqui noted that the researchers don’t know yet whether this translates to a decrease in muscle function.
In terms of metabolic risk, the low-carbohydrate diet did not affect serum lipids (such as triglycerides or cholesterol measures), renal function (such as serum creatinine, glomerular filtration rate, or blood urea nitrogen), or insulin resistance (as measured by glucose or hemoglobin A1c). At the same time, among patients taking insulin at the time of enrollment, about 90% of those randomized to the low-carbohydrate group were able to reduce insulin to zero during the study.
Upon completion of the current study, Dr. Siddiqui and colleagues hope to provide foundational safety and efficacy data for carbohydrate restriction in liver transplant recipients. In the ongoing study, the researchers are further investigating the dietary intervention impacts on metabolic flexibility, skeletal muscle mitochondrial function, atherogenic lipoproteins, and vascular function.
“Are we actually, on a molecular level, fixing the fundamental problem that liver transplant recipients have to improve outcomes?” he said. “We’re doing very detailed profiling of these patients, so we will have data that shows how this actually affects them.”
Dr. Siddiqui was asked about the sustainability of the low-carbohydrate diet, particularly with a restrictive parameter of 20 grams per day. The study was slowed during the COVID-19 pandemic, Dr. Siddiqui noted, which gave the research team time to collect follow-up data.
“Surprisingly, we have a high rate of compliance, even after 6 months of therapy, and I think this has to do with a patient population that’s been through cirrhosis and has almost died,” he said. “They’re far more compliant, and we’re seeing that. We’re also changing the physiology and improving mitochondrial function, which improves the weight loss and weight maintenance, though I don’t know how long that’s going to last.”
The study sponsorship was not disclosed. Dr. Siddiqui reported no relevant conflicts of interest.
FROM THE LIVER MEETING
Don’t wait for patients to bring up their GI symptoms
Nearly three-quarters of Americans would wait before discussing GI symptoms with a health care provider if their bowel frequency or symptoms changed, with more than a quarter overall waiting for symptoms to become severe, according to a new survey from the American Gastroenterological Association.
Nearly 40% of people said GI symptoms had disrupted everyday activities such as exercising, running errands, and spending time with family or friends, but despite these disruptions, 30% of people said they would only discuss their bowel-related concerns if their doctor brought it up first. In response, the AGA launched “Trust Your Gut,” an awareness campaign aimed at shortening the time from the onset of bowel symptoms to discussions with health care providers.
“So many patients are either fearful or embarrassed about discussing their digestive symptoms such that they delay care unless the health care provider brings it up,” said Rajeev Jain, MD, a gastroenterologist with Texas Digestive Disease Consultants, AGA patient education adviser and a Trust Your Gut spokesperson.
“This potential delay could be detrimental in some cases, such as bleeding related to colon cancer,” he said. “If diagnosed sooner, an operation or chemotherapy could lead to treatment and a cure in those cases, versus advanced cancer that may be incurable.”
The AGA Trust Your Gut survey, conducted by Kelton Global during May 9-11, 2022, included 1,010 respondents from a nationally representative sample of U.S. adults.
Struggling with the issue
About 28% of respondents said they would see a clinician immediately if their bowel frequency or symptoms changed. However, 72% said they would wait, including 27% who said they would wait until the condition became severe or didn’t resolve over time. Women were more likely than men to say they would wait, at 72% versus 64%.
Overall, 39% of respondents said bowel issues have stopped them from doing some type of activity in the past year. Men were more likely than women to say that bowel issues have affected their ability to do an activity, at 44% versus 35%.
“Typically, when it comes to functional or motility disorders or bowel dysfunction, we tend to see a higher prevalence in women, so this was somewhat surprising to see,” said Andrea Shin, MD, a gastroenterology specialist and assistant professor of medicine at Indiana University, Indianapolis, and AGA patient education adviser designate.
“Part of this difference may be related to the communication barrier and how sex or gender affects that relationship between a clinician and a patient,” she said.
The reasons for patients’ reluctance vary, but themes of uncertainty and embarrassment are prevalent. About 33% said they’re not sure whether the symptoms are a problem, 31% said they hope the symptoms improve on their own, 23% said it’s embarrassing, and 12% don’t know what to tell the doctor. Men were more likely than women to say they don’t know what to say to a doctor about their symptoms, at 15% versus 9%.
Starting the conversation
From a young age, many respondents were raised to avoid the topic of bowel issues. About 23% said their parents encouraged them not to mention bathroom-related health issues, and 10% said they didn’t talk about bowel issues at all. Another 32% said they could talk about it but had to use code words, such as “go to the bathroom” or “potty.”
“What this highlights is that patients are culturally taught not to talk about their digestive tract, or they’re embarrassed or uncertain,” Dr. Jain said. “At the end of the day, we need to destigmatize discussions about digestive function and normalize it as part of overall health.”
The survey respondents said they’d feel most comfortable talking about bowel issues with doctors (63%) and nurses (41%), as well as a significant other (44%), parent (32%), or friend (27%). Women were more likely than men to feel comfortable turning to a nurse practitioner or physician assistant (47% versus 35%) or a friend (30% versus 24%).
To feel more comfortable with these conversations, 42% of survey participants said they would like their doctor or clinician to describe what’s normal. About 30% want to know the appropriate terms to describe their situation.
Health care providers should also consider the cultural and social factors that may affect a patient’s disease experience, as well as how they interact with the health care system, Shin said.
“Understanding these differences might help us to better engage with a community that is diverse,” she said. “In general, we also need to be more proactive about drawing these conversations out of patients, who may not mention it unless we ask because they find it so personal.”
The AGA Trust Your Gut campaign is supported by a sponsorship from Janssen. Dr. Jain and Dr. Shin reported no relevant disclosures.
Help your patients learn more by encouraging them to visit https://patient.gastro.org/trust-your-gut/.
Endoscopic severity score helps guide treatment in immune-mediated colitis
A novel endoscopic severity score can help guide treatment decisions in immune-mediated colitis, according to new research presented at the annual meeting of the American College of Gastroenterology.
An endoscopy score cutoff of 4 or higher had a specificity of 82.8% across all colitis grades, and a cutoff of 5 or higher had a specificity of 87.6%, said Yinghong Wang, MD, PhD, a gastroenterologist at the University of Texas MD Anderson Cancer Center, Houston.
Immune-mediated colitis (IMC) is a common immune-related adverse event associated with immune checkpoint inhibitors. Dr. Wang and colleagues previously reported on endoscopic presentations of IMC, including severe inflammation with deep ulcerated mucosa; moderate to severe inflammation with diffuse erythema, superficial ulcers, exudate, and loss of vasculature; and mild inflammation with patchy erythema, aphtha, edema, or normal mucosa associated with histological inflammation.
Endoscopic scoring systems haven’t been established for IMC, but previous studies have shown benefits from early endoscopic evaluation. The current Common Terminology Criteria for Adverse Events (CTCAE) grading system for clinical symptoms alone has been poorly correlated with endoscopic findings and unable to provide accurate assessments, Dr. Wang said.
“There is a critical and urgent need to develop a new scoring system that could provide accurate and comprehensive assessment for IMC severity to better predict the requirement of more aggressive selective immunosuppressive therapy (SIT), which includes infliximab and vedolizumab,” she said.
Dr. Wang and colleagues conducted a retrospective international study across 14 centers to develop a new comprehensive endoscopic scoring system to assess the severity of IMC and explore its utility in predicting the need for aggressive treatment with SIT. They included 674 adult cancer patients in the United States, United Kingdom, Germany, and Australia with IMC who underwent endoscopic evaluation between 2010 and 2020.
All patients had received immune checkpoint inhibitors, an IMC diagnosis, and endoscopy and histology evaluations for IMC. In addition, all patients had diarrhea, including 92% who had grade 2 diarrhea and higher and 80% who had grade 2 colitis and higher. About 85% were treated with corticosteroids, 31% were treated with infliximab, 10% were treated with vedolizumab, and 5% received both corticosteroids and SIT.
Based on endoscopic reports, the research team looked at 10 endoscopic features and assigned one point each for erythema, edema, loss of vasculature, friability, erosions, exudate, any ulcers, large ulcers, deep ulcers, and more than two ulcers. The median IMC endoscopic score was 2.
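The additive design described above (one point per feature, for a score of 0-10) can be sketched in a few lines. This is an illustrative sketch only; the feature keys and boolean-input interface are assumptions, not the study group's actual implementation.

```python
# Hypothetical sketch of the 10-feature IMC endoscopic score described
# in the article: one point per feature present, total range 0-10.
IMC_FEATURES = [
    "erythema", "edema", "loss_of_vasculature", "friability", "erosions",
    "exudate", "any_ulcers", "large_ulcers", "deep_ulcers",
    "more_than_two_ulcers",
]

def imc_endoscopic_score(findings: dict) -> int:
    """Sum one point for each endoscopic feature marked present."""
    return sum(1 for feature in IMC_FEATURES if findings.get(feature, False))

def flags_sit(score: int, cutoff: int = 4) -> bool:
    """Flag possible need for selective immunosuppressive therapy (SIT)
    using the study's reported cutoffs (>=4 or >=5)."""
    return score >= cutoff
```

Under this sketch, a patient with diffuse erythema, exudate, and multiple deep ulcers would accumulate points from several ulcer-related features at once, which is how ulcerated presentations clear the SIT cutoff.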
The scoring system was devised by measuring the specificity of a selected score cutoff in predicting the need for SIT based on clinical consensus from the study group.
The researchers divided the cohort into a training set and a validation set. In the training set, an IMC endoscopy score cutoff of 4 or more had a specificity of 82.8% across all colitis grades and 96.4% among grade 1 colitis to predict SIT use. A cutoff of 5 or more had a specificity of 87.6% across all colitis grades and 98.2% among grade 1. These specificities were comparable to those of the validation sets.
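The specificity figures above follow the standard definition: among patients who did not go on to require SIT, the fraction scoring below the cutoff. A minimal sketch, with made-up example data:

```python
# Illustrative specificity calculation for a score cutoff predicting SIT
# use. Specificity = true negatives / all actual negatives, where a
# "negative" is a patient who did not require SIT. Data below is invented.
def cutoff_specificity(scores, needed_sit, cutoff):
    """Specificity of the rule `score >= cutoff` for predicting SIT use."""
    negatives = [s for s, sit in zip(scores, needed_sit) if not sit]
    if not negatives:
        return float("nan")
    true_negatives = sum(1 for s in negatives if s < cutoff)
    return true_negatives / len(negatives)
```

Raising the cutoff from 4 to 5 can only keep or shrink the set of flagged patients, which is why the reported specificity rises from 82.8% to 87.6%.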
At the same time, the CTCAE score was poorly associated with prediction of future SIT use, with a specificity of 27.4% for clinical colitis grading and 12.3% for diarrhea grading.
In addition, an IMC endoscopic score of 4 or 5 plus ulcer factors had a numerically higher specificity than a Mayo Endoscopic Score of 3. The IMC endoscopic score had a specificity of 85% at a cutoff of 4 and 88.2% at a cutoff of 5, as compared with 74.6% for the Mayo score.
Early endoscopic evaluation in disease course was associated with early SIT use, with a P value of less than .001.
“Implementation of this novel endoscopic scoring system could guide future IMC treatment more precisely,” Dr. Wang said.
The study funding was not disclosed. The authors reported consultant roles, advisory roles, and research support from several pharmaceutical companies.
FROM ACG 2022
Guselkumab induction improves moderate to severe active UC at week 12
Guselkumab induction therapy improved outcomes at week 12 in patients with moderately to severely active ulcerative colitis (UC), according to findings presented at the annual meeting of the American College of Gastroenterology.
The efficacy of the 200-mg dose and the 400-mg dose was comparable, said David Rubin, MD, a gastroenterologist at the University of Chicago Medicine Inflammatory Bowel Disease Center. Outcomes improved in all patients, with or without a history of inadequate response or intolerance to advanced therapy.
Guselkumab, an interleukin-23 p19 subunit antagonist, is currently being investigated in inflammatory bowel disease.
The QUASAR Induction Study 1 (NCT04033445) is a phase 2b, randomized, double-blind, placebo-controlled study that evaluates guselkumab as induction therapy in patients with moderately to severely active UC. Inclusion criteria specify a demonstrated inadequate response or intolerance to conventional therapy, such as thiopurines or corticosteroids, or to advanced therapy, such as tumor necrosis factor–alpha antagonists, vedolizumab, or tofacitinib. The study didn’t include patients exposed to ustekinumab.
Study participants were age 18 and older with moderately to severely active UC, defined as a modified Mayo score of 5-9 with a Mayo rectal bleeding subscore of 1 or greater and a Mayo endoscopy subscore of 2 or greater at baseline. The groups were randomized 1:1:1 to receive 400 mg of IV guselkumab, 200 mg of guselkumab, or placebo at weeks 0, 4, and 8.
At week 12, the research team looked for several key endpoints. Clinical response was defined as a modified Mayo score decrease of 30% or more and a drop of 2 or more points, with either a decrease of 1 point or more in the rectal bleeding subscore or a rectal bleeding subscore of 0 or 1.
Clinical remission was defined as a stool frequency subscore of 0 or 1 that hadn’t increased from baseline, a rectal bleeding subscore of 0, and an endoscopy subscore of 0 or 1 with no friability present on the endoscopy.
In addition, symptomatic remission was defined as a stool subscore of 0 or 1 that hadn’t increased from baseline and rectal bleeding subscore of 0.
Endoscopic improvement was defined as an endoscopy subscore of 0 or 1 with no friability present on the endoscopy. Endoscopic normalization was an endoscopy subscore of 0.
Notably, the research team looked at histoendoscopic mucosal improvement, which includes a combination of endoscopic improvement and histologic improvement (neutrophil infiltration in less than 5% of crypts, no crypt destruction, and no erosions, ulcerations, or granulation tissue, according to the Geboes grading system).
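The endpoint definitions above are rule-based and can be expressed directly as predicates. The function names, signatures, and field choices below are illustrative assumptions, not the trial's statistical analysis plan.

```python
# Hedged sketch of the week-12 endpoint definitions quoted in the text.
def clinical_response(baseline_mayo: int, week12_mayo: int,
                      rb_decrease: int, rb_subscore: int) -> bool:
    """Modified Mayo decrease >=30% and >=2 points, plus either a rectal
    bleeding (RB) subscore decrease >=1 or an RB subscore of 0 or 1."""
    drop = baseline_mayo - week12_mayo
    return (drop >= 2 and drop >= 0.3 * baseline_mayo
            and (rb_decrease >= 1 or rb_subscore <= 1))

def clinical_remission(stool_subscore: int, stool_increased: bool,
                       rb_subscore: int, endo_subscore: int,
                       friability: bool) -> bool:
    """Stool frequency subscore 0/1 not increased from baseline, RB
    subscore 0, and endoscopy subscore 0/1 with no friability."""
    return (stool_subscore <= 1 and not stool_increased
            and rb_subscore == 0 and endo_subscore <= 1 and not friability)
```

For example, a patient going from a modified Mayo score of 8 to 5 with a 1-point rectal bleeding improvement meets the response definition (a 3-point drop is both >=2 points and >=30% of 8), while a 1-point drop would not.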
Among the 313 total patients, 47% had a history of inadequate response or intolerance to advanced therapy, and about half of these patients had prior inadequate response or intolerance to two or more advanced therapy classes.
At baseline, about 90% of patients had an endoscopic subscore of 3 (severe). More than half had extensive UC, and the average UC duration was 9 years. About 20% overall had extraintestinal manifestations present, which were noted in 33% of the 400-mg guselkumab treatment arm.
At week 12, clinical response was achieved by a higher proportion of patients treated with guselkumab versus placebo, at 50.5% versus 25.5% for patients with prior inadequate response or intolerance to advanced therapy and 70.3% versus 29.6% for those without prior inadequate response or intolerance to advanced therapy, the authors reported in the abstract.
Compared with placebo, higher proportions of patients treated with guselkumab achieved clinical, endoscopic, and histologic outcomes in both groups with or without inadequate response or intolerance to advanced therapy. Generally, those without a history of inadequate response had higher response rates across all endpoints.
Overall, both the 200-mg and 400-mg doses of guselkumab were statistically superior to the placebo across all endpoints for both groups (with or without inadequate response or intolerance). Although the efficacy was comparable for the two doses, the 400-mg dose was associated with greater histoendoscopic mucosal improvement in both groups.
“It’s of interest to think about how we position and sequence our therapies with this additional data,” Dr. Rubin said.
The study was sponsored by Janssen Research & Development. Several authors are employees for and have stock options with Johnson & Johnson and Janssen. The other authors reported consultant roles, advisory roles, and research support from numerous pharmaceutical companies, including Janssen.
findings presented at the annual meeting of the American College of Gastroenterology.
according toThe efficacy of the 200-mg dose and the 400-mg dose was comparable, said David Rubin, MD, a gastroenterologist at the University of Chicago Medicine Inflammatory Bowel Disease Center. Outcomes improved in all patients, with or without a history of inadequate response or intolerance to advanced therapy.
Guselkumab, an interleukin-12 p19 subunit antagonist, is currently being investigated in inflammatory bowel disease.
The QUASAR Induction Study 1 (NCT04033445) is a phase 2b, randomized, double-blind, placebo-controlled study that evaluates guselkumab as induction therapy in patients with moderately to severely active UC. Inclusion criteria specify a demonstrated inadequate response or intolerance to conventional therapy, such as thiopurines or corticosteroids, or to advanced therapy, such as tumor necrosis factor–alpha antagonists, vedolizumab, or tofacitinib. The study didn’t include patients exposed to ustekinumab.
Study participants were age 18 and older with moderately to severely active UC, defined as a modified Mayo score of 5-9 with a Mayo rectal bleeding subscore of 1 or greater and a Mayo endoscopy subscore of 2 or greater at baseline. The groups were randomized 1:1:1 to receive 400 mg of IV guselkumab, 200 mg of guselkumab, or placebo at weeks 0, 4, and 8.
At week 12, the research team looked for several key endpoints. Clinical response was defined as a modified Mayo score decrease of 30% or more and a drop in 2 or more points, with either a 1-point decrease or more in the rectal bleeding subscore or a rectal bleeding subscore of 0 or 1.
Clinical remission was defined as a stool frequency subscore of 0 or 1 that hadn’t increased from baseline, a rectal bleeding subscore of 0, and an endoscopy subscore of 0 or 1 with no friability present on the endoscopy.
In addition, symptomatic remission was defined as a stool subscore of 0 or 1 that hadn’t increased from baseline and rectal bleeding subscore of 0.
Endoscopic improvement was defined as an endoscopy subscore of 0 or 1 with no friability present on the endoscopy. Endoscopic normalization was an endoscopy subscore of 0.
Notably, the research team also assessed histoendoscopic mucosal improvement, a combination of endoscopic improvement and histologic improvement (neutrophil infiltration in less than 5% of crypts; no crypt destruction; and no erosions, ulcerations, or granulation tissue, per the Geboes grading system).
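Taken together, these week-12 definitions are composite rules over the Mayo subscores. A minimal sketch of how they combine (illustrative only; the data structure and field names are assumptions, not the trial's actual analysis code):

```python
from dataclasses import dataclass

@dataclass
class Week12Assessment:
    """Illustrative container for one patient's Mayo subscores (assumed field names)."""
    baseline_modified_mayo: int    # modified Mayo score (0-9) at baseline
    modified_mayo: int             # modified Mayo score (0-9) at week 12
    baseline_stool_frequency: int  # stool frequency subscore (0-3) at baseline
    stool_frequency: int           # stool frequency subscore (0-3) at week 12
    baseline_rectal_bleeding: int  # rectal bleeding subscore (0-3) at baseline
    rectal_bleeding: int           # rectal bleeding subscore (0-3) at week 12
    endoscopy: int                 # endoscopy subscore (0-3) at week 12
    friability: bool               # friability present on week-12 endoscopy

def clinical_response(a: Week12Assessment) -> bool:
    # Modified Mayo score decrease of >= 30% and >= 2 points from baseline,
    # plus either a >= 1-point drop in rectal bleeding or a subscore of 0 or 1.
    drop = a.baseline_modified_mayo - a.modified_mayo
    pct_drop = drop / a.baseline_modified_mayo if a.baseline_modified_mayo else 0.0
    bleeding_ok = (a.baseline_rectal_bleeding - a.rectal_bleeding >= 1
                   or a.rectal_bleeding <= 1)
    return pct_drop >= 0.30 and drop >= 2 and bleeding_ok

def endoscopic_improvement(a: Week12Assessment) -> bool:
    # Endoscopy subscore of 0 or 1 with no friability.
    return a.endoscopy <= 1 and not a.friability

def clinical_remission(a: Week12Assessment) -> bool:
    # Stool frequency 0 or 1 and not increased from baseline,
    # rectal bleeding 0, and endoscopic improvement as defined above.
    return (a.stool_frequency <= 1
            and a.stool_frequency <= a.baseline_stool_frequency
            and a.rectal_bleeding == 0
            and endoscopic_improvement(a))
```

Histoendoscopic mucosal improvement would additionally require the histologic criteria from the Geboes grading system on top of `endoscopic_improvement`.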
Among the 313 total patients, 47% had a history of inadequate response or intolerance to advanced therapy, and about half of these patients had prior inadequate response or intolerance to two or more advanced therapy classes.
At baseline, about 90% of patients had an endoscopic subscore of 3 (severe). More than half had extensive UC, and the average UC duration was 9 years. About 20% overall had extraintestinal manifestations, which were noted in 33% of the 400-mg guselkumab treatment arm.
At week 12, clinical response was achieved by a higher proportion of patients treated with guselkumab versus placebo, at 50.5% versus 25.5% for patients with prior inadequate response or intolerance to advanced therapy and 70.3% versus 29.6% for those without prior inadequate response or intolerance to advanced therapy, the authors reported in the abstract.
Compared with placebo, higher proportions of patients treated with guselkumab achieved clinical, endoscopic, and histologic outcomes in both groups with or without inadequate response or intolerance to advanced therapy. Generally, those without a history of inadequate response had higher response rates across all endpoints.
Overall, both the 200-mg and 400-mg doses of guselkumab were statistically superior to placebo across all endpoints in both groups (with or without inadequate response or intolerance). Although efficacy was comparable for the two doses, the 400-mg dose was associated with greater histoendoscopic mucosal improvement in both groups.
“It’s of interest to think about how we position and sequence our therapies with this additional data,” Dr. Rubin said.
The study was sponsored by Janssen Research & Development. Several authors are employees of, and have stock options with, Johnson & Johnson and Janssen. The other authors reported consultant roles, advisory roles, and research support from numerous pharmaceutical companies, including Janssen.
FROM ACG 2022
Dupilumab improves eosinophilic esophagitis up to 24 weeks
Dupilumab appears to improve clinical, symptomatic, histologic, and endoscopic aspects of eosinophilic esophagitis (EoE) up to 24 weeks, according to findings presented at the annual meeting of the American College of Gastroenterology.
The drug was also well tolerated, demonstrating consistency with the known dupilumab safety profile, said Evan S. Dellon, MD, a gastroenterologist at the University of North Carolina at Chapel Hill.
In May, the Food and Drug Administration approved dupilumab (Dupixent) for the treatment of EoE in adults and adolescents who are 12 years and older and weigh at least 40 kg (about 88 pounds), based on safety and efficacy data previously presented by Dr. Dellon and colleagues as part of the phase 3 LIBERTY-EoE-TREET study (NCT03633617).
“Dupilumab is now the only medication FDA approved to treat EoE in the U.S.,” Dr. Dellon said. “The findings here are that the pooled efficacy and safety data for parts A and B of the phase 3 trial are consistent with the results of the individual parts of the study that were previously reported, and which led to the drug being approved for EoE.”
EoE is a chronic, progressive, type 2 inflammatory disease of the esophagus, which can lead to symptoms of esophageal dysfunction that affect quality of life. Current treatment options often lack specificity, present adherence challenges, and provide suboptimal long-term disease control, Dr. Dellon said.
Dupilumab, a fully human monoclonal antibody manufactured by Regeneron Pharmaceuticals, blocks the shared receptor component for interleukin-4 and IL-13, which are central drivers of type 2 inflammation in EoE.
Study population difficult to treat
In the three-part, double-blind, placebo-controlled, phase 3 study, dupilumab was administered to 122 patients as 300-mg weekly doses through subcutaneous injection. In parts A and B, dupilumab demonstrated statistically significant and clinically meaningful improvement in adults and adolescents up to 24 weeks. In patients from part A who continued to an extended active treatment period called part C, efficacy was sustained to week 52.
Participants were included if they had EoE that hadn’t responded to high-dose proton pump inhibitors; baseline esophageal biopsies with a peak intraepithelial eosinophil count of 15 eosinophils per high-power field (eos/HPF) or higher in two or more esophageal regions; a history of an average of two or more episodes of dysphagia per week in the 4 weeks prior to screening; four or more episodes of dysphagia in the 2 weeks prior to randomization, with two or more episodes that required liquids or medical attention; and a baseline Dysphagia Symptom Questionnaire (DSQ) score of 10 or higher.
On the other hand, participants were excluded if they initiated or changed a food-elimination diet regimen or reintroduced a previously eliminated food group in the 6 weeks before screening, had other causes of esophageal eosinophilia, had a history of other inflammatory diseases such as Crohn’s disease or ulcerative colitis, or were treated with swallowed topical corticosteroids within 8 weeks prior to baseline.
Dr. Dellon and colleagues focused on two co-primary endpoints: the proportion of patients who achieved a peak esophageal intraepithelial eosinophil count of 6 eos/HPF or less, and the absolute change in DSQ score from baseline to week 24.
Key secondary endpoints included percentage change in eos/HPF, absolute change in EoE-Endoscopic Reference Score (EREFS), absolute change in EoE-Histologic Scoring System (EoE-HSS) grade score, and EoE-HSS stage score. Other secondary endpoints included percentage change in DSQ score and proportion of patients achieving less than 15 eos/HPF.
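The count-based endpoints above reduce to simple per-patient calculations. A minimal sketch (function and parameter names are my own, not from the trial's statistical analysis code):

```python
def meets_primary_histologic(peak_eos: float) -> bool:
    # Co-primary endpoint: peak intraepithelial count of 6 eos/HPF or less at week 24.
    return peak_eos <= 6.0

def meets_secondary_histologic(peak_eos: float) -> bool:
    # Secondary endpoint: peak count below 15 eos/HPF at week 24.
    return peak_eos < 15.0

def absolute_change(baseline: float, week24: float) -> float:
    # Absolute change from baseline (negative values = improvement
    # for DSQ, EREFS, and EoE-HSS scores).
    return week24 - baseline

def percent_change(baseline: float, week24: float) -> float:
    # Percent change from baseline, e.g. the ~80% drop in eosinophil counts.
    return 100.0 * (week24 - baseline) / baseline
```

Trial-level results then compare the proportions of patients meeting each threshold, and the mean changes, between the dupilumab and placebo arms.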
The baseline demographics and clinical characteristics were similar between the treatment and placebo groups. Importantly, about 70% had been treated with topical corticosteroids, and about 40% had a history of esophageal dilation, Dr. Dellon said. The DSQ scores, peak eosinophil counts, and EREFS scores were high, indicating an inflamed, symptomatic, and difficult-to-treat population.
Pooled parts A and B findings
Overall, dupilumab reduced peak esophageal intraepithelial eosinophil counts at week 24. In the dupilumab group, 59% of patients reached 6 eos/HPF or less, compared with 5.9% in the placebo group. In a secondary endpoint, 77% of dupilumab patients fell below 15 eos/HPF, compared with 7.6% in the placebo group. The dupilumab group also had an 80% decrease in peak eosinophil count from baseline, compared with 1.5% in the placebo group.
Dupilumab also reduced dysphagia symptoms and improved endoscopic features of EoE at week 24. The absolute change in DSQ score was –23.21 in the dupilumab group, compared with –12.69 in the placebo group. The percent change in DSQ score was –65.5% in the dupilumab group, compared with –38.2% in the placebo group. The absolute change in EREFS score was –3.95 in the dupilumab group, compared with –0.41 in the placebo group.
In addition, dupilumab reduced histologic scores at week 24. The absolute change in EoE-HSS grade score was –0.82 in the dupilumab group, compared with –0.1 in the placebo group. The absolute change in EoE-HSS stage score was –0.79 in the dupilumab group, compared with –0.09 in the placebo group.
Dupilumab demonstrated an acceptable safety profile, and no new safety signals were noted, Dr. Dellon said. The most common adverse event was injection-site reactions, at 37.5% in the dupilumab group and 33.3% in the placebo group. Severe adverse events were not related to the medication.
“If patients have EoE, dupilumab might be an option for treatment. However, it’s important to realize that, in the phase 3 study, all patients were PPI nonresponders, most had been treated with topical steroids [and many were not responsive], and many had prior esophageal dilation,” Dr. Dellon said. “We don’t have a lot of data in more mild EoE patients, and insurances are currently requiring a series of authorizations before patients might be able to get this medication. It’s best to talk to their doctor about whether the medication is a good fit or not.”
The study was sponsored by Sanofi and Regeneron Pharmaceuticals. Three of the authors are employees of, and have stock options with, Regeneron or Sanofi. The other authors reported consultant roles, advisory roles, and research support from numerous pharmaceutical companies, including Regeneron and Sanofi.
FROM ACG 2022