AI algorithm helps predict egg retrieval date during fertility treatment cycles

Artificial intelligence can accurately predict the optimal retrieval date in fertility treatment cycles, according to preliminary research presented at the annual meeting of the American Society for Reproductive Medicine. The researchers say such an algorithm is needed because of the increased demand for fertility treatments and the high day-to-day variability in lab workload.

According to the study investigators, predicting retrieval dates in advance for ongoing cycles is of major importance for both patients and clinicians.

“The population requiring fertility treatments, including genetic testing and fertility preservation, has massively increased, and this causes many more cycles and a high day-to-day variability in IVF activity, especially in the lab workload,” said Rohi Hourvitz, MBA, from FertilAI, an Israeli health care company focused on developing technologies that improve fertility treatments.

“We also need to accommodate and reschedule for non-working days, which causes a big issue with managing the workload in many clinics around the world,” added Mr. Hourvitz, who presented the research highlighting AI’s growing role in reproductive medicine.

In addition, AI has recently emerged as an effective tool for assisting in clinical decision-making in assisted reproductive technology, prompting further research in this space, he said.

The new study used a dataset of 9,550 predictable antagonist cycles (defined as having all necessary data) gathered from one lab with over 50 physicians between August 2018 and October 2022. The data were split into two subsets: one for training the AI model and the other for prospective testing. 

To train and test the AI model, data from nearly 6,000 predictable antagonist cycles were used. Key factors used for each cycle included estrogen levels, mean follicle size, primary follicle size, and various patient demographics. Other features were considered, but Mr. Hourvitz noted that primary follicle size influenced the algorithm most, “because that is what most of us use when we want to trigger.”

Mr. Hourvitz explained that these patient data were run through an algorithm that produced a graph predicting the most probable date for a cycle retrieval.

“We could accurately predict when those ‘peak days’ were going to be happening in the clinic, and we could also give a pretty good estimate on how many cycles you’re going to have every day,” Mr. Hourvitz said, explaining that this information could help clinics more efficiently allocate resources and manage patients.
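
The presentation did not describe the model's internals, so any concrete implementation is speculative. As a rough, hypothetical sketch of the workflow outlined above (features such as estradiol level, mean and lead follicle size, and patient age going in; a probability for each candidate retrieval day coming out), a toy version might look like the following. The feature values, label construction, and choice of a gradient-boosted classifier are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch only; not the FertilAI algorithm. It mimics the described
# inputs (estrogen level, follicle sizes, demographics) and output (a probability
# for each candidate retrieval day) using synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000

# Synthetic features per monitored cycle: estradiol (pg/mL), mean follicle
# size (mm), lead ("primary") follicle size (mm), and patient age (years).
X = np.column_stack([
    rng.normal(1800, 500, n),  # estradiol
    rng.normal(14, 3, n),      # mean follicle size
    rng.normal(18, 3, n),      # lead follicle size
    rng.normal(35, 5, n),      # age
])

# Synthetic label: days from this monitoring visit to retrieval (2-6 days),
# loosely driven by lead follicle size so the toy model has a signal to learn.
days_to_retrieval = np.clip(
    np.round(8 - 0.25 * X[:, 2] + rng.normal(0, 0.7, n)), 2, 6
).astype(int)

model = GradientBoostingClassifier().fit(X, days_to_retrieval)

# For a new cycle, output a probability for each candidate retrieval day,
# analogous to the "most probable retrieval date" graph described above.
new_cycle = np.array([[2100.0, 15.0, 19.0, 33.0]])
for day, prob in zip(model.classes_, model.predict_proba(new_cycle)[0]):
    print(f"retrieval in {day} days: probability {prob:.2f}")
```

Summing such per-cycle probabilities across all active patients for each calendar day would yield the kind of daily caseload estimate, the "peak days," that Mr. Hourvitz describes.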

According to Mr. Hourvitz, the predictions derived from this study could improve various aspects of fertility treatments and related procedures, including better staff planning and caseload management in IVF labs, as well as higher-quality eggs at retrieval. Patients would have a clearer timeline for their treatment cycles.   

Nikica Zaninovic, PhD, MS, director of the embryology lab at Weill Cornell Medical College, New York City, cautioned that the new findings are not yet ready for clinical application but emphasized the importance of more AI research focusing on the quality of oocytes, not only embryos.

“We’re so focused on the end of the process: the embryo,” Dr. Zaninovic, who was not involved in the research, said in an interview. “I think the focus should be on the beginning – the quality of eggs and sperm, not just the quantity – because that’s what the embryos will depend on.”

He noted the increasing numbers of young women in the United States undergoing egg freezing.

“Cornell is the largest academic IVF center in the United States; 20%-30% of all of the patients that we treat are actually freezing their eggs,” he said. “It’s a huge population.”

“When they come to us, they ask how many eggs they’ll need to guarantee one or two children in the future,” Dr. Zaninovic continued. “We don’t have that answer, so we always tell them [we’ll retrieve] as many as we can. That’s not the answer; we need to be more precise. We’re still lacking these tools, and I think that’s where the research will go.”

The study was funded by FertilAI. Mr. Hourvitz is a shareholder and CEO of FertilAI. Dr. Zaninovic is president of the AI Fertility Society.

A version of this article appeared on Medscape.com.

Excellent outcome of Ross procedure after 2 decades

TOPLINE:

About 83% of adults with aortic valve disease who underwent the Ross procedure, which uses a valve substitute that preserves mobility of the neo-aortic root, were still alive 25 years after the operation, a survival rate equivalent to that of the general population, a new study shows. The need for reintervention in these patients is low.

METHODOLOGY:

  • The study was a post hoc analysis of a randomized clinical trial that showed superior survival, freedom from reoperation, and quality of life at 10 years for patients who received the Ross procedure, compared with those who got homograft root replacement.
  • This new analysis included 108 patients, median age 38 years and mostly male and of British origin, who underwent the Ross procedure. Of these, 45% had aortic regurgitation (AR) as the main hemodynamic lesion.
  • The primary outcome was long-term survival, compared with an age-, sex-, and country of origin–matched general U.K. population using a novel, patient-level matching strategy. Secondary outcomes included freedom from any valve-related reintervention, autograft reintervention, or homograft reintervention.

TAKEAWAY:

  • Survival at 25 years was 83.0% (95% confidence interval [CI], 75.5%-91.2%), representing a relative survival of 99.1% (95% CI, 91.8%-100%) compared with the matched general population, whose survival was 83.7% (see the worked ratio after this list).
  • At 25 years, freedom from any Ross-related reintervention was 71.1% (95% CI, 61.6%-82.0%); freedom from autograft reintervention was 80.3% (95% CI, 71.9%-89.6%); and freedom from homograft reintervention was 86.3% (95% CI, 79.0%-94.3%).
  • There was no increased hazard for autograft deterioration in patients presenting with versus without preoperative AR, an important finding because it has been suggested that the benefits of the Ross procedure may not extend fully to patients with preoperative AR, the authors said.
  • At the latest clinical follow-up (approaching 25 years), 86% of patients had New York Heart Association class I or II status.
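
For readers less familiar with the metric, relative survival is conventionally the ratio of observed survival in the cohort to the survival expected in the matched general population. Reading the figures above through that standard definition (no additional analysis is implied) reproduces the reported value:

```latex
\[
\text{relative survival at 25 years}
  \approx \frac{S_{\text{Ross cohort}}}{S_{\text{matched population}}}
  = \frac{0.830}{0.837}
  \approx 0.991 \quad (99.1\%)
\]
```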

IN PRACTICE:

This study shows the Ross procedure “provided excellent survival into the third decade after surgery,” with the new data further supporting “the unique benefits” of the valve substitute in adults, the authors conclude.

Authors of an accompanying editorial, Tsuyoshi Kaneko, MD, Division of Cardiothoracic Surgery, Washington University School of Medicine, St. Louis, and Maral Ouzounian, MD, PhD, Peter Munk Cardiac Centre, Division of Cardiac Surgery, University Health Network, University of Toronto, write that the new evidence suggests the Ross procedure is “a truly attractive option in younger patients with long life expectancy.” However, they note that aortic regurgitation in the cohort worsened over time, potentially leading to late reinterventions; echocardiographic follow-up was available in only 71% of patients; and generalizing the Ross procedure to a broader group of surgeons is challenging.

SOURCE:

The study was conducted by Maximiliaan L. Notenboom, BSc, department of cardiothoracic surgery, Erasmus University Medical Center, Rotterdam, the Netherlands, and colleagues. It was published online in JAMA Cardiology.

LIMITATIONS:

The analysis reflects a single-surgeon experience, so it’s difficult to extrapolate the results, although the operative steps involved in the Ross procedure have now been clearly delineated, making the operation reproducible. The duration of echocardiographic follow-up was shorter and less complete than the clinical follow-up. Outcomes of the cohort that underwent homograft procedures in the randomized clinical trial were not reported, but since that procedure has nearly disappeared from practice, reporting on its long-term outcomes would be of limited clinical significance.

DISCLOSURES:

Mr. Notenboom has disclosed no relevant financial relationships. Co-author Fabio De Robertis, MD, department of cardiothoracic surgery and transplantation, Royal Brompton & Harefield Hospitals, London, received nonfinancial support from Edwards Lifesciences for travel and personal fees from Bristol Myers Squibb for consulting outside the submitted work, and has a service agreement with Medtronic U.K., which paid a fee to the Royal Brompton & Harefield Hospitals Charity Fund.

Editorial co-author Kaneko received personal fees from Edwards Lifesciences, Medtronic, Abbott, and Johnson & Johnson outside the submitted work; Ouzounian received personal fees from Medtronic, Edwards Lifesciences, and Terumo Aortic outside the submitted work.

A version of this article appeared on Medscape.com.

Women have worse outcomes in cardiogenic shock

Women with heart failure–related cardiogenic shock have worse outcomes and more vascular complications than men, a new analysis of registry data shows.

“These data identify the need for us to continue working to identify barriers in terms of diagnosis, management, and technological innovations for women in cardiogenic shock to resolve these issues and improve outcomes,” the senior author of the study, Navin Kapur, MD, Tufts Medical Center, Boston, said in an interview.

The study is said to be one of the largest contemporary analyses of real-world registry data comparing the characteristics and outcomes of women and men with cardiogenic shock.

It showed sex-specific differences in outcomes that were primarily driven by differences in heart failure–related cardiogenic shock. Women with heart failure–related cardiogenic shock had more severe cardiogenic shock, worse survival at discharge, and more vascular complications than men. Outcomes in cardiogenic shock related to MI were similar for men and women.

The study, which will be presented at the upcoming annual meeting of the American Heart Association, was published online in JACC: Heart Failure.

Dr. Kapur founded the Cardiogenic Shock Working Group in 2017 to collect quality data on the condition.

“We realized our patients were dying, and we didn’t have enough data on how best to manage them. So, we started this registry, and now have detailed data on close to 9,000 patients with cardiogenic shock from 45 hospitals in the U.S., Mexico, Australia, and Japan,” he explained.

“The primary goal is to try to investigate the questions related to cardiogenic shock that can inform management, and one of the key questions that came up was differences in how men and women present with cardiogenic shock and what their outcomes may be. This is what we are reporting in this paper,” he added.

Cardiogenic shock is defined as having a low cardiac output most commonly because of MI or an episode of acute heart failure, Dr. Kapur said. Patients with cardiogenic shock are identified by their low blood pressure or hypoperfusion evidenced by clinical exam or biomarkers, such as elevated lactate levels.

“In this analysis, we’re looking at patients presenting with cardiogenic shock, so we’re not looking at the incidence of the condition in men versus women,” Dr. Kapur noted. “However, we believe that cardiogenic shock is probably more underrepresented in women, who may present with an MI or acute heart failure and may or may not be identified as having low cardiac output states until quite late. The likelihood is that the incidence is similar in men and women, but women are more often undiagnosed.”

For the current study, the authors analyzed data on 5,083 patients with cardiogenic shock in the registry, of whom 1,522 (30%) were women. Compared with men, women had slightly higher body mass index (BMI) and smaller body surface area.

Results showed that women with heart failure–related cardiogenic shock had worse survival at discharge than men (69.9% vs. 74.4%) and a higher rate of refractory shock (SCAI stage E; 26% vs. 21%). Women were also less likely to undergo pulmonary artery catheterization (52.9% vs. 54.6%), heart transplantation (6.5% vs. 10.3%), or left ventricular assist device implantation (7.8% vs. 10%).

Regardless of cardiogenic shock etiology, women had more vascular complications (8.8% vs. 5.7%), bleeding (7.1% vs. 5.2%), and limb ischemia (6.8% vs. 4.5%).

“This analysis is quite revealing. We identified some important distinctions between men and women,” Dr. Kapur commented.

For many patients who present with MI-related cardiogenic shock, many of the baseline characteristics in men and women were quite similar, he said. “But in heart failure–related cardiogenic shock, we saw more differences, with typical comorbidities associated with cardiogenic shock [e.g., diabetes, chronic kidney disease, hypertension] being less common in women than in men. This suggests there may be phenotypic differences as to why women present with heart failure shock versus men.”

Dr. Kapur pointed out that differences in BMI or body surface area between men and women may play into some of the management decision-making.

“Women having a smaller stature may lead to a selection bias where we don’t want to use large-bore pumps or devices because we’re worried about causing complications. We found in the analysis that vascular complications such as bleeding or ischemia of the lower extremity where these devices typically go were more frequent in women,” he noted.

“We also found that women were less likely to receive invasive therapies in general, including pulmonary artery catheters, temporary mechanical support, and heart replacements, such as LVAD or transplants,” he added.

Further results showed that, after propensity score matching, some of the gender differences disappeared, but women continued to have a higher rate of vascular complications (10.4% women vs. 7.4% men).

But Dr. Kapur warned that the propensity-matched analysis had some caveats.

“Essentially what we are doing with propensity matching is creating two populations that are as similar as possible, and this reduced the number of patients in the analysis down to 25% of the original population,” he said. “One of the things we had to match was body surface area, and in doing this, we are taking out one of the most important differences between men and women, and as a result, a lot of the differences in outcomes go away.

“In this respect, propensity matching can be a bit of a double-edged sword,” he added. “I think the non–propensity-matched results are more interesting, as they are more of a reflection of the real world.”
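
The registry's matching covariates and algorithm are not spelled out here, so the sketch below is a generic, hypothetical illustration of 1:1 nearest-neighbor propensity score matching on synthetic data, not the actual analysis. It is meant only to show the trade-off Dr. Kapur describes: matching on covariates such as body surface area balances the groups but discards unmatched patients, shrinking the analyzable cohort.

```python
# Generic propensity-score matching sketch on synthetic data; illustrative only,
# not the Cardiogenic Shock Working Group analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n_women, n_men = 1500, 3500

# Synthetic covariates per patient: age (years) and body surface area (m^2),
# with women drawn from a smaller-BSA distribution to mimic the imbalance.
women = np.column_stack([rng.normal(62, 12, n_women), rng.normal(1.7, 0.15, n_women)])
men = np.column_stack([rng.normal(63, 12, n_men), rng.normal(2.0, 0.15, n_men)])

X = np.vstack([women, men])
is_woman = np.concatenate([np.ones(n_women), np.zeros(n_men)])

# Step 1: estimate each patient's propensity score, P(woman | covariates).
ps = LogisticRegression().fit(X, is_woman).predict_proba(X)[:, 1]

# Step 2: match each woman to the nearest man on the propensity score
# (1:1, with replacement, for simplicity).
ps_women = ps[is_woman == 1].reshape(-1, 1)
ps_men = ps[is_woman == 0].reshape(-1, 1)
dist, idx = NearestNeighbors(n_neighbors=1).fit(ps_men).kneighbors(ps_women)

# Step 3: keep only pairs closer than a caliper; everyone else is dropped,
# which is why the matched cohort is much smaller than the full registry.
caliper = 0.01
kept = dist.ravel() <= caliper
matched_men_rows = idx.ravel()[kept]  # indices (among the men) of matched controls
print(f"matched pairs: {kept.sum()} of {n_women} women ({kept.mean():.0%} retained)")
```

Outcomes are then compared only within the matched pairs, which is why differences driven by the matched covariates themselves (body surface area in this toy example) largely disappear after matching.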

Dr. Kapur concluded that these findings are compelling enough to suggest that there are important differences between women and men with cardiogenic shock in terms of outcomes as well as complication rates.

“Our decision-making around women seems to be different to that around men. I think this paper should start to trigger more awareness of that.”

Dr. Kapur also emphasized the importance of paying attention to vascular complications in women.

“The higher rates of bleeding and limb ischemia issues in women may explain the rationale for being less aggressive with invasive therapies in women,” he said. “But we need to come up with better solutions or technologies so they can be used more effectively in women. This could include adapting technology for smaller vascular sizes, which should lead to better outcome and fewer complications in women.”

He added that further granular data on this issue are needed. “We have very limited datasets in cardiogenic shock. There are few randomized controlled trials, and women are not well represented in such trials. We need to make sure we enroll women in randomized trials.”

Dr. Kapur said more women physicians who treat cardiogenic shock are also required, which would include cardiologists, critical care specialists, cardiac surgeons, and anesthesia personnel.

He pointed out that the study’s two co-first authors are women – Van-Khue Ton, MD, Massachusetts General Hospital, Boston, and Manreet Kanwar, MD, Allegheny Health Network, Pittsburgh.

“We worked hard to involve women as principal investigators. They led the effort. These are investigations led by women, on women, to advance the care of women,” he commented.

Gender-related inequality

In an editorial accompanying publication of the study, Sara Kalantari, MD, and Jonathan Grinstein, MD, University of Chicago, and Robert O. Roswell, MD, Hofstra University, Hempstead, N.Y., said these results “provide valuable information about gender-related inequality in care and outcomes in the management of cardiogenic shock, although the exact mechanisms driving these observed differences still need to be elucidated.

“Broadly speaking, barriers in the care of women with heart failure and cardiogenic shock include a reduced awareness among both patients and providers, a deficiency of sex-specific objective criteria for guiding therapy, and unfavorable temporary mechanical circulatory support devices with higher rates of hemocompatibility-related complications in women,” they added.

“In the era of the multidisciplinary shock team and shock pathways with protocolized management algorithms, it is imperative that we still allow for personalization of care to match the physiologic needs of the patient in order for us to continue to close the gender gap in the care of patients presenting with cardiogenic shock,” the editorialists concluded.

A version of this article appeared on Medscape.com.

The challenges of palmoplantar pustulosis and other acral psoriatic disease

The approval last year of the interleukin (IL)-36 receptor antagonist spesolimab for treating generalized pustular psoriasis flares brightened the treatment landscape for this rare condition, and a recently published phase 2 study suggests a potential role of spesolimab for flare prevention. But when it comes to pustular disease localized to the hands and feet – palmoplantar pustulosis – treatment options have only modest efficacy, and spesolimab appears not to work, according to speakers at the annual research symposium of the National Psoriasis Foundation.

“The IL-36 receptor antagonists don’t seem to be quite the answer for [palmoplantar pustulosis] that they are for generalized pustular psoriasis [GPP],” Megan H. Noe, MD, MPH, assistant professor of dermatology at Harvard Medical School and a dermatologist at Brigham and Women’s Hospital, Boston, said at the meeting.

Psoriasis affecting the hands and feet – both pustular and nonpustular – has a greater impact on quality of life and causes more functional disability than non-acral psoriasis, is less responsive to treatment, and has a “very confusing nomenclature” that complicates research and thus management, said Jason Ezra Hawkes, MD, a dermatologist in Rocklin, Calif., and former faculty member of several departments of dermatology. Both he and Dr. Noe spoke during a tough-to-treat session at the NPF meeting.

IL-17 and IL-23 blockade, as well as tumor necrosis factor (TNF) inhibition, are effective overall for palmoplantar psoriasis (nonpustular), but in general, responses are lower than for plaque psoriasis. Apremilast (Otezla), a phosphodiesterase-4 inhibitor, has some efficacy for pustular variants, but for hyperkeratotic variants it “does not perform as well as more selective inhibition of IL-17 and IL-23 blockade,” he said.

In general, “what’s happening in the acral sites is different from an immune perspective than what’s happening in the non-acral sites,” and more research utilizing a clearer, descriptive nomenclature is needed to tease out differing immunophenotypes, explained Dr. Hawkes, who has led multiple clinical trials of treatments for psoriasis and other inflammatory skin conditions.
 

Palmoplantar pustulosis, and a word on generalized disease

Dermatologists are using a variety of treatments for palmoplantar pustulosis, with no clear first-line choices, Dr. Noe said. In a case series of almost 200 patients with palmoplantar pustulosis across 20 dermatology practices, published in JAMA Dermatology, 35% of patients received a systemic therapy prescription at their initial encounter – most commonly acitretin, followed by methotrexate and phototherapy. “Biologics were used, but use was varied and not as often as with oral agents,” said Dr. Noe, a coauthor of the study.

TNF blockers led to improvements ranging from 57% to 84%, depending on the agent, in a 2020 retrospective study of patients with palmoplantar pustulosis or acrodermatitis continua of Hallopeau, Dr. Noe noted. However, rates of complete clearance were only 20%-29%.

Apremilast showed modest efficacy after 5 months of treatment, with 62% of patients achieving at least a 50% improvement in the Palmoplantar Pustulosis Psoriasis Area and Severity Index (PPPASI) in a 2021 open-label, phase 2 study involving 21 patients. “This may represent a potential treatment option,” Dr. Noe said. “It’s something, but not what we’re used to seeing in our plaque psoriasis patients.”

A 2021 phase 2a, double-blind, randomized, placebo-controlled study of spesolimab in patients with palmoplantar pustulosis, meanwhile, failed to meet its primary endpoint, with only 32% of patients achieving a 50% improvement at 16 weeks, compared with 24% of patients in the placebo arm. And a recently published network meta-analysis found that none of the five drugs studied in seven randomized controlled trials – biologic or oral – was more effective than placebo for clearance or improvement of palmoplantar pustulosis.
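
As a back-of-the-envelope illustration of why an 8-point difference like 32% versus 24% can fail to reach statistical significance in a small phase 2a trial, the sketch below runs a standard two-proportion z-test. The arm sizes are hypothetical placeholders (the trial’s actual enrollment is not given here), so this is illustrative arithmetic, not a reanalysis of the study.

```python
import math

def two_proportion_z_test(p1: float, n1: int, p2: float, n2: int):
    """Two-sided z-test for a difference between two response proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return round(z, 2), round(p_value, 2)

# Hypothetical arm sizes; the observed response rates are taken from the article.
print(two_proportion_z_test(0.32, 60, 0.24, 30))  # e.g., (0.79, 0.43): not significant
```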

The spesolimab (Spevigo) results have been disappointing considering the biologic’s newfound efficacy and role as the first Food and Drug Administration–approved therapy for generalized pustular disease, according to Dr. Noe. The ability of a single 900-mg intravenous dose of the IL-36 receptor antagonist to completely clear pustules at 1 week in 54% of patients with generalized disease, compared with 6% of the placebo group, was “groundbreaking,” she said, referring to results of the pivotal trial published in the New England Journal of Medicine.

And given that “preventing GPP flares is ultimately what we want,” she said, more good news was reported this year in The Lancet: an international, randomized, placebo-controlled study found that high-dose subcutaneous spesolimab significantly reduced the risk of a flare over 48 weeks. “There are lots of ongoing studies right now to understand the best way to dose spesolimab,” she said.

Moreover, another IL-36 receptor antagonist, imsidolimab, is being investigated in a phase 3 trial for generalized pustular disease, she noted. A phase 2, open-label study of patients with GPP found that “more than half of patients were very much improved at 4 weeks, and some patients started showing improvement at day 3,” Dr. Noe said.

An area of research she is interested in is the potential of Janus kinase (JAK) inhibitors as a treatment for palmoplantar pustulosis; recent case reports describing their efficacy in pustulosis of the hands and feet have caught her eye. “Right now, all we have is this case report data, mostly with tofacitinib, but I think it’s exciting,” she said, noting a recently published report in the British Journal of Dermatology.

 

 



Palmoplantar psoriasis

Pustular psoriatic disease can be localized to the hands and/or feet only, or can co-occur with generalized pustular disease, just as palmoplantar psoriasis can be localized to the hands and/or feet or, more commonly, can co-occur with widespread plaque psoriasis. Research has shown, Dr. Hawkes said, that with both types of acral disease, many patients have or have had plaque psoriasis outside of acral sites.

The nomenclature and acronyms for palmoplantar psoriatic disease have complicated patient education, communication, and research, Dr. Hawkes said. Does PPP refer to palmoplantar psoriasis, or palmoplantar pustulosis, for instance? What is the difference between palmoplantar pustulosis (coined PPP) and palmoplantar pustular psoriasis (referred to as PPPP)?

What if disease is only on the hands, only on the feet, or only on the backs of the hands? And at what point is disease not classified as palmoplantar psoriasis, but plaque psoriasis with involvement of the hands and feet? Inconsistencies and lack of clarification lead to “confusing” literature, he said.



Heterogeneity in populations across trials resulting from “inconsistent categorization and phenotype inclusion” may partly account for the recalcitrance to treatment reported in the literature, he said. Misdiagnosis as psoriasis in cases of localized disease (confusion with eczema, for instance), and the fact that hands and feet are subject to increased trauma and injury, compared with non-acral sites, are also at play.

Trials may also allow insufficient time for improvement, compared with non-acral sites. “What we’ve learned about the hands and feet is that it takes a much longer time for disease to improve,” Dr. Hawkes said, so primary endpoints must take this into account.

There is unique immunologic signaling in palmoplantar disease that differs from the predominant signaling in traditional plaque psoriasis, he emphasized, and “mixed immunophenotypes” that need to be unraveled.

Dr. Hawkes disclosed ties with AbbVie, Arcutis, Bristol-Myers Squibb, Boehringer Ingelheim, Janssen, LEO, Lilly, Novartis, Pfizer, Regeneron, Sanofi, Sun Pharma, and UCB. Dr. Noe disclosed ties to Bristol-Myers Squibb and Boehringer Ingelheim.

Meta-analysis of postcancer use of immunosuppressive therapies shows no increase in cancer recurrence risk

Article Type
Changed
Wed, 11/15/2023 - 14:57

Patients with immune-mediated diseases and a history of malignancy had similar rates of cancer recurrence whether or not they were receiving immunosuppressive treatments, shows a newly published systematic review and meta-analysis that covered approximately 24,000 patients and 86,000 person-years of follow-up.

The findings could “help guide clinical decision making,” providing “reassurance that it remains safe to use conventional immunomodulators, anti-TNF [tumor necrosis factor] agents, or newer biologics in individuals with [immune-mediated diseases] with a prior malignancy consistent with recent guidelines,” Akshita Gupta, MD, of Massachusetts General Hospital, Boston, and coinvestigators wrote in Clinical Gastroenterology and Hepatology.

And because a stratification of studies by the timing of immunosuppressive therapy initiation found no increased risk when treatment was started within 5 years of a cancer diagnosis compared with later initiation, the meta-analysis could “potentially reduce the time to initiation of immunosuppressive treatment,” the authors wrote, noting a continued need for individualized decision-making.

Ustekinumab, a monoclonal antibody targeting interleukin-12 and IL-23, and vedolizumab, a monoclonal antibody that binds to alpha4beta7 integrin, were covered in the meta-analysis, but investigators found no studies on the use of upadacitinib or other Janus kinase (JAK) inhibitors, or the use of S1P modulators, in patients with prior malignancies.

The analysis included 31 observational studies, 17 of which involved patients with inflammatory bowel disease (IBD). (Of the other studies, 14 involved patients with rheumatoid arthritis, 2 covered psoriasis, and 1 covered ankylosing spondylitis.)
 

Similar levels of risk

The incidence rate of new or recurrent cancers among individuals not receiving any immunosuppressive therapy for IBD or other immune-mediated diseases after an index cancer was 35 per 1,000 patient-years (95% confidence interval, 27-43 per 1,000 patient-years; 1,627 incident cancers among 12,238 patients, 43,765 patient-years), and the rate among anti-TNF users was similar at 32 per 1,000 patient-years (95% CI, 25-38 per 1,000 patient-years; 571 cancers among 3,939 patients, 17,772 patient-years).
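
For readers who want to sanity-check the arithmetic, the short sketch below recomputes the crude anti-TNF rate from the counts quoted above and adds a naive Poisson-approximation confidence interval. Note that the intervals reported in the paper come from random-effects pooling across studies, so they are wider than this simple calculation.

```python
import math

def rate_per_1000(events: int, patient_years: float) -> float:
    """Crude incidence rate per 1,000 patient-years."""
    return 1000 * events / patient_years

def poisson_ci_per_1000(events: int, patient_years: float, z: float = 1.96):
    """Naive 95% CI from a normal approximation to the Poisson count."""
    rate = events / patient_years
    se = math.sqrt(events) / patient_years
    return 1000 * (rate - z * se), 1000 * (rate + z * se)

# Anti-TNF group as reported above: 571 incident cancers over 17,772 patient-years.
print(round(rate_per_1000(571, 17_772), 1))                           # ~32.1
print(tuple(round(x, 1) for x in poisson_ci_per_1000(571, 17_772)))   # ~(29.5, 34.8)
```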

Among patients on conventional immunomodulator therapy (thiopurines, methotrexate), the incidence rate was numerically higher at 46 per 1,000 patient-years (95% CI, 31-61; 1,104 incident cancers among 5,930 patients; 17,018 patient-years), but was not statistically different from anti-TNF (P = .92) or no immunosuppression (P = .98).

Patients on combination immunosuppression also had numerically higher rates of new or recurrent cancers at 56 per 1,000 patient-years (95% CI, 31-81; 179 incident cancers, 2,659 patient-years), but these rates were not statistically different from immunomodulator use alone (P = .19), anti-TNF alone (P = .06) or no immunosuppressive therapy (P = .14).

In contrast, patients on ustekinumab and vedolizumab had numerically lower rates of cancer recurrence, compared with the other treatment groups: 21 per 1,000 patient-years (95% CI, 0-44; 5 cancers among 41 patients, 213 patient-years) and 16 per 1,000 patient-years (95% CI, 5-26; 37 cancers among 281 patients, 1,951 patient-years), respectively. However, the difference was statistically significant only for vedolizumab (P = .03 vs. immunomodulators and P = .04 vs. anti-TNF agents).

Subgroup analyses for new primary cancers, recurrence of a prior cancer, and type of index cancer (skin cancer vs. other cancers) similarly found no statistically significant differences between treatment arms. Results were similar in patients with IBD and RA.
 

 

 

Timing of therapy

The new meta-analysis confirms and expands on a previous meta-analysis published in Gastroenterology in 2016 that showed no impact of treatment – primarily immunomodulator (IMM) or anti-TNF treatment – on cancer recurrence in patients with immune-mediated diseases, Dr. Gupta and coauthors wrote.

The 2016 meta-analysis reported similar cancer recurrence rates with IMMs and anti-TNFs whether immunosuppression was introduced within 6 years of the cancer diagnosis or later. In the new meta-analysis – with twice the number of patients, a longer duration of follow-up, and the inclusion of other biologic therapies – a stratification of results at the median interval of therapy initiation similarly found no increased risk before 5 years, compared with after 5 years.

“Although several existing guidelines recommend avoiding immunosuppression for 5 years after the index cancer, our results indicate that it may be safe to initiate these agents earlier than 5 years, at least in some patients,” Dr. Gupta and coauthors wrote, mentioning the possible impact of selection bias and surveillance bias in the study. Ongoing registries “may help answer this question more definitively with prospectively collected data, but inherently may suffer from this selection bias as well.”

Assessment of the newer biologics ustekinumab and vedolizumab is limited by the low number of studies (four and five, respectively) and by limited duration of follow-up. “Longer-term evaluation after these treatments is essential but it is reassuring that in the early analysis we did not observe an increase and in fact noted numerically lower rates of cancers,” they wrote.

It is also “critically important” to generate more data on JAK inhibitors, and to further study the safety of combining systemic chemotherapy and the continuation of IBD therapy in the setting of a new cancer diagnosis, they wrote.

The study was funded in part by grants from the Crohn’s and Colitis Foundation, and the Chleck Family Foundation. Dr. Gupta disclosed no conflicts. One coauthor disclosed consulting for Abbvie, Amgen, Biogen, and other companies, and receiving grants from several companies. Another coauthor disclosed serving on the scientific advisory boards for AbbVie and other companies, and receiving research support from Pfizer.

AGA publishes CPU for AI in colon polyp diagnosis and management

Article Type
Changed
Fri, 11/10/2023 - 09:07

The American Gastroenterological Association has published a Clinical Practice Update (CPU) on artificial intelligence (AI) for diagnosing and managing colorectal polyps.

The CPU, authored by Jason Samarasena, MD, of UCI Health, Orange, Calif., and colleagues, draws on recent studies and clinical experience to discuss ways that AI is already reshaping colonoscopy, and what opportunities may lie ahead.

Dr. Jason Samarasena

“As with any emerging technology, there are important questions and challenges that need to be addressed to ensure that AI tools are introduced safely and effectively into clinical endoscopic practice,” they wrote in Gastroenterology.

With advances in processing speed and deep-learning technology, AI “computer vision” can now analyze live video of a colonoscopy in progress, enabling computer-aided detection (CADe) and computer-aided diagnosis (CADx), which the panelists described as the two most important developments in the area.
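
Conceptually, real-time CADe amounts to running a trained detector over every incoming video frame and surfacing any detection that clears an alert threshold. The sketch below illustrates only that control flow; the detector is a placeholder stub, and none of the names correspond to an actual vendor API.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List, Tuple

@dataclass
class Detection:
    x: int           # bounding-box corner and size, in pixels
    y: int
    width: int
    height: int
    confidence: float

def detect_polyps(frame) -> List[Detection]:
    """Placeholder for a trained deep-learning detector; returns no detections here."""
    return []

def cade_stream(frames: Iterable, threshold: float = 0.5) -> Iterator[Tuple[int, List[Detection]]]:
    """Yield (frame index, detections) whenever a detection clears the alert threshold."""
    for i, frame in enumerate(frames):
        hits = [d for d in detect_polyps(frame) if d.confidence >= threshold]
        if hits:
            yield i, hits  # in practice these boxes would be overlaid on the live video
```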
 

CADe

“In the last several years, numerous prospective, multicenter studies have found that real-time use of AI CADe tools during colonoscopy leads to improvements in adenoma detection and other related performance metrics,” Dr. Samarasena and colleagues wrote.

CADe has yielded mixed success in real-world practice, however, with some studies reporting worse detection metrics after implementing the new technology. Dr. Samarasena and colleagues offered a variety of possible explanations for these findings, including a “ceiling effect” among highly adept endoscopists, reduced operator vigilance caused by false confidence in the technology, and potential confounding inherent to unblinded trials.

CADe may also increase health care costs and burden, they suggested, as the technology tends to catch small benign polyps, prompting unnecessary resections and shortened colonoscopy surveillance intervals.
 

CADx

These unintended consequences of CADe may be counteracted by CADx, which uses computer vision to predict which lesions have benign histology, enabling “resect-and-discard” or “diagnose-and-leave” strategies.

Such approaches could significantly reduce rates of polypectomy and/or histopathology, saving an estimated $33 million–150 million per year, according to the update.

Results of real-time CADx clinical trials have been “encouraging,” Dr. Samarasena and colleagues wrote, noting that emerging technology–compatible white-light endoscopy can achieve a negative predictive value of almost 98% for lesions less than 5 mm in diameter, potentially reducing polypectomy rate by almost half.
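
For context on that figure, negative predictive value answers the question: of the diminutive lesions the tool labels benign, how many truly are benign? The snippet below shows the calculation with hypothetical counts chosen only to illustrate a 98% NPV; they are not taken from the CPU.

```python
def negative_predictive_value(true_negatives: int, false_negatives: int) -> float:
    """NPV = TN / (TN + FN): of lesions called benign, the fraction that truly are."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical illustration: optical diagnosis labels 500 diminutive polyps benign,
# and pathology later shows 10 of those were adenomas.
print(negative_predictive_value(true_negatives=490, false_negatives=10))  # 0.98
```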

“Increasing endoscopist confidence in optical diagnosis may be an important step toward broader implementation of leave in situ and resect-and-discard strategies, but successful implementation will also require CADx tools that seamlessly integrate the endoscopic work flow, without the need for image enhancement or magnification,” the panelists wrote.

Reimbursement models may also need to be reworked, they suggested, as many GI practices depend on a steady stream of revenue from pathology services.
 

Computer-aided quality assessment systems

Beyond optical detection and diagnosis, AI tools are also being developed to improve colonoscopy technique.

Investigators are studying quality assessment systems that use AI to offer feedback on a range of endoscopist skills, including colonic-fold evaluation, level of mucosal exposure, and withdrawal time, the latter of which is visualized by a “speedometer” that “paints” the mucosa with “a graphical representation of the colon.”
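
As a trivial illustration of the simplest of those metrics, the sketch below computes withdrawal time from two procedure timestamps and checks it against the 6-minute benchmark commonly cited in colonoscopy quality programs; the AI systems described above go far beyond this bookkeeping, but the underlying quantity is the same.

```python
from datetime import datetime

def withdrawal_minutes(cecum_reached: datetime, scope_removed: datetime) -> float:
    """Withdrawal time: cecal intubation to scope removal, in minutes."""
    return (scope_removed - cecum_reached).total_seconds() / 60

def meets_benchmark(minutes: float, benchmark: float = 6.0) -> bool:
    """Commonly cited quality benchmark of >= 6 minutes for screening colonoscopy."""
    return minutes >= benchmark

t_cecum = datetime(2023, 11, 9, 10, 12)
t_out = datetime(2023, 11, 9, 10, 19, 30)
wt = withdrawal_minutes(t_cecum, t_out)   # 7.5 minutes
print(wt, meets_benchmark(wt))            # 7.5 True
```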

“In the future, these types of AI-based systems may support trainees and lower-performing endoscopists to reduce exposure errors and, more broadly, may empower physician practices and hospital systems with more nuanced and actionable data on an array of factors that contribute to colonoscopy quality,” the panelists wrote.
 

 

 

Looking ahead

Dr. Samarasena and colleagues concluded by suggesting that the AI tools now in use and in development are just the beginning of a wave of technology that will revolutionize how colonoscopies are performed.

“Eventually, we predict an AI suite of tools for colonoscopy will seem indispensable, as a powerful adjunct to support safe and efficient clinical practice,” they wrote. “As technological innovation progresses, we can expect that the future for AI in endoscopy will be a hybrid model, where the unique capabilities of physicians and our AI tools will be seamlessly intertwined to optimize patient care.”

This CPU was commissioned and approved by the AGA Institute Clinical Practice Updates Committee and the AGA Governing Board. The investigators disclosed relationships with Olympus, Neptune Medical, Conmed, and others.

The steep costs of disrupting gut-barrier harmony

Article Type
Changed
Thu, 11/09/2023 - 16:23

An interview with Elena Ivanina, DO, MPH

From Ayurveda to the teachings of Hippocrates, medicine’s earliest traditions advanced a belief that the gut was the foundation of all health and disease. It wasn’t until recently, however, that Western medicine has adopted the notion of gut-barrier dysfunction as a pathologic phenomenon critical to not only digestive health but also chronic allergic, inflammatory, and autoimmune disease.

To learn more, Medscape contributor Akash Goel, MD, interviewed Elena Ivanina, DO, MPH, an integrative gastroenterologist, on the role of the gut barrier. Dr. Ivanina is the founder of the Center for Integrative Gut Health and the former director of Neurogastroenterology and Motility at Lenox Hill Hospital in New York. She runs the educational platform for all things gut health, gutlove.com.

What is the role of the gut barrier in overall health and disease?

The gut contains the human body’s largest interface between a person and their external environment. The actual interface is at the gut barrier, where there needs to be an ideal homeostasis and selectivity mechanism to allow the absorption of healthy nutrients, but on the other hand prevent the penetration of harmful microbes, food antigens, and other proinflammatory factors and toxins.

The gut barrier is made up of the mucus layer, gut microbiome, epithelial cells, and immune cells in the lamina propria. When this apparatus is disrupted by factors such as infection, low-fiber diet, antibiotics, and alcohol, then it cannot function normally to selectively keep out the harmful intraluminal substances.

Gut-barrier disruption leads to translocation of dangerous intraluminal components, such as bacteria and their components, into the gut wall and, most importantly, exposes the immune system to them. This causes improper immune activation and dysregulation, which has been shown to lead to various diseases, including gastrointestinal inflammatory disorders such as inflammatory bowel disease (IBD) and celiac disease, systemic autoimmune diseases such as multiple sclerosis and rheumatoid arthritis, and metabolic diseases such as obesity and diabetes.



Is disruption of this barrier what is usually referred to as “leaky gut”?

Leaky gut is a colloquial term for increased intestinal permeability, or intestinal hyperpermeability. In a 2019 review article, Dr. Michael Camilleri describes leaky gut as a term that can be misleading and confusing to the general population. The review calls on clinicians to be more aware of the potential role of barrier dysfunction in disease and to consider the barrier as a target for treatment.



Is leaky gut more of a mechanism of underlying chronic disease or is it a disease of its own?

Intestinal permeability is a pathophysiologic process in the gut with certain risk factors that in some conditions has been shown to precede chronic disease. There has not been any convincing evidence that it can be diagnosed and treated as its own entity, but research is ongoing.

In IBD, the Crohn’s and Colitis Canada Genetic, Environmental, Microbial Project research consortium has been studying individuals at increased risk for Crohn’s disease because they have a first-degree family member with Crohn’s disease. The consortium found an increased abundance of Ruminococcus torques in the microbiomes of at-risk individuals who went on to develop the disease. R. torques is a mucin degrader that induces an increase in other mucin-using bacteria, which can contribute to gut-barrier compromise.

In other studies, patients have been found to have asymptomatic intestinal hyperpermeability years before their diagnosis of Crohn’s disease. This supports understanding more about the potential of intestinal hyperpermeability as its own diagnosis that, if addressed, could possibly prevent disease development.
 

 

 

The many possible sources of gut-barrier disruption

What causes leaky gut, and when should physicians and patients be suspicious if they have it?

There are many risk factors that have been associated with leaky gut in both human studies and animal studies, including acrolein (food toxin), aging, alcohol, antacid drugs, antibiotics, burn injury, chemotherapy, circadian rhythm disruption, corticosteroids, emulsifiers (food additives), strenuous exercise (≥ 2 hours) at 60% VO2 max, starvation, fructose, fructans, gliadin (wheat protein), high-fat diet, high-salt diet, high-sugar diet, hyperglycemia, low-fiber diet, nonsteroidal anti-inflammatory drugs, pesticide, proinflammatory cytokines, psychological stress, radiation, sleep deprivation, smoking, and sweeteners.

Patients may be completely asymptomatic with leaky gut. Physicians should be suspicious if there is a genetic predisposition to chronic disease or if any risk factors are unveiled after assessing diet and lifestyle exposures.



What is the role of the Western diet and processed food consumption in driving disruptions of the gut barrier?

The Western diet reduces gut-barrier mucus thickness, leading to increased gut permeability. People who consume a Western diet typically eat less than 15 grams of fiber per day, significantly less than people in many other cultures, including the hunter-gatherers of Tanzania (Hadza), who get 100 or more grams of fiber a day from their food.

With a fiber-depleted diet, gut microbiota that normally feed on fiber gradually disappear and other commensals shift their metabolism to degrade the gut-barrier mucus layer.

A low-fiber diet also decreases short-chain fatty acid production, which reduces production of mucus and affects tight junction regulation.
 

Emerging evidence on causality

New evidence is demonstrating that conditions of the gastrointestinal tract previously considered functional, like functional dyspepsia, are associated with abnormalities of the intestinal barrier. What is the association of conditions like functional dyspepsia and irritable bowel syndrome (IBS) with gut-barrier disruption?

Conditions such as functional dyspepsia and IBS are similar in that their pathophysiology is incompletely understood and likely attributable to contributions from many different underlying mechanisms. This makes it difficult for clinicians to explain the condition to patients and often to treat without specific therapeutic targets.

Emerging evidence with new diagnostic tools, such as confocal laser endomicroscopy, has demonstrated altered mucosal barrier function in both conditions.

In patients with IBS who have a suspected food intolerance, studies looking at exposure to the food antigens found that the food caused immediate breaks, increased intervillous spaces, and increased inflammatory cells in the gut mucosa. These changes were associated with patient responses to exclusion diets.

In functional dyspepsia, another study, using confocal laser endomicroscopy, has shown that affected patients have significantly greater epithelial gap density in the duodenum, compared with healthy controls. There was also impaired duodenal-epithelial barrier integrity and evidence of increased cellular pyroptosis in the duodenal mucosa.

These findings suggest that while IBS and functional dyspepsia are still likely multifactorial, there may be a common preclinical state that can be further investigated as far as preventing its development and using it as a therapeutic target.



What diagnostic testing are you using to determine whether patients have disruptions to the gut barrier? Are they validated or more experimental?

There are various testing strategies that have been used in research to diagnose intestinal hyperpermeability. In a 2021 analysis, Dr. Michael Camilleri found that the optimal probes for measuring small intestinal and colonic permeability are the mass excreted of 13C-mannitol at 0-2 hours and lactulose during 2-8 hours or sucralose during 8-24 hours. Studies looking at postinfectious IBS have incorporated elevated urinary lactulose/mannitol ratios. Dr. Alessio Fasano and others have looked at using zonulin as a biomarker of impaired gut-barrier function. These tests are still considered experimental.
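
For readers unfamiliar with the dual-sugar tests mentioned here, the urinary lactulose/mannitol ratio is simply the fraction of ingested lactulose recovered in urine divided by the fraction of ingested mannitol recovered over the same collection window; the numbers in the worked example are illustrative only, not validated diagnostic cutoffs.

\[
\mathrm{LMR} \;=\; \frac{\text{fraction of ingested lactulose recovered in urine}}{\text{fraction of ingested mannitol recovered in urine}},
\qquad \text{e.g. } \frac{0.9\%}{18\%} = 0.05
\]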



Is there an association between alterations in the gut microbiome and gut-barrier disruption?

There is an integral relationship between the gut microbiome and gut-barrier function, and dysbiosis can disrupt gut-barrier functionality.

The microbiota produce a variety of metabolites in close proximity to the gut epithelium, impacting gut-barrier function and immune response. For example, short-chain fatty acids produced by Bifidobacterium, Bacteroides, Enterobacter, Faecalibacterium, and Roseburia species impact host immune cell differentiation and metabolism as well as influence susceptibility to pathogens.

Studies have shown that sodium butyrate significantly improves epithelial-barrier function. Other experiments have used transplantation of the intestinal microbiota to show that introduction of certain microbial phenotypes can significantly increase gut permeability.
 

 

 

Practical advice for clinicians and patients

How do you advise patients to avoid gut-barrier disruption?

It is important to educate and counsel patients about the long list of risk factors, many of which are closely related to a Western diet and lifestyle, which can increase their risk for leaky gut.

Once one has it, can it be repaired? Can you share a bit about your protocols in general terms?

Many interventions have been shown to improve intestinal permeability. They include berberine, butyrate, caloric restriction and fasting, curcumin, dietary fiber (prebiotics), moderate exercise, fermented food, fish oil, glutamine, quercetin, probiotics, vagus nerve stimulation, vitamin D, and zinc.

Protocols have to be tailored to patients and their risk factors, diet, and lifestyle.

What are some tips from a nutrition and lifestyle standpoint that patients can follow to ensure a robust gut barrier?

It is important to emphasize a high-fiber diet with naturally fermented food; time-restricted eating, such as an early dinner and nothing else before bedtime; a moderate exercise routine; and gut-brain modulation with techniques such as acupuncture that can incorporate vagus nerve stimulation. Limited, safe precision supplementation can be discussed on an individual basis based on the patient’s interest, additional testing, and other existing health conditions.
 

Dr. Akash Goel is a clinical assistant professor of medicine at Weill Cornell in gastroenterology and hepatology. He has disclosed no relevant financial relationships. His work has appeared on networks and publications such as CNN, The New York Times, Time Magazine, and Financial Times. He has a deep interest in nutrition, food as medicine, and the intersection between the gut microbiome and human health.

A version of this article appeared on Medscape.com.


Depression: Differential Diagnosis

Article Type
Changed
Thu, 11/09/2023 - 15:49


Risk calculator for early-stage CKD may soon enter U.S. market

Article Type
Changed
Thu, 11/09/2023 - 15:10

A proprietary formula showed good performance in stratifying the risk that adults with early-stage chronic kidney disease (CKD) will advance to more severe kidney dysfunction and increased health care needs. The analyses offer the possibility of focusing intensified medical management of early-stage CKD on those patients who could potentially receive the most benefit.

The Klinrisk model predicts the risk of an adult with early-stage CKD developing either a 40% or greater drop in estimated glomerular filtration rate or kidney failure. It calculates risk based on 20 lab-measured variables that include serum creatinine, urine albumin-to-creatinine ratio, and other values taken from routinely ordered tests such as complete blood cell counts, chemistry panels, comprehensive metabolic panels, and urinalysis.
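
Klinrisk’s actual coefficients are proprietary and are not reported here. Purely as an illustration of how a lab-only risk score of this kind can be assembled, the sketch below applies a logistic function to a handful of routine lab values; the feature set, coefficients, and example inputs are hypothetical and are not the Klinrisk model.

```python
# Purely illustrative lab-only risk score; NOT the proprietary Klinrisk formula.
import math

# Hypothetical coefficients for a handful of routine lab features (assumed, not published).
COEFFS = {
    "intercept": -4.0,
    "log_uacr": 0.55,              # log of urine albumin-to-creatinine ratio (mg/g)
    "serum_creatinine_mg_dl": 0.9,
    "hemoglobin_g_dl": -0.15,
    "bicarbonate_mmol_l": -0.05,
}


def progression_risk(labs: dict) -> float:
    """Logistic probability of >=40% eGFR decline or kidney failure (illustration only)."""
    z = (COEFFS["intercept"]
         + COEFFS["log_uacr"] * math.log(labs["uacr_mg_g"])
         + COEFFS["serum_creatinine_mg_dl"] * labs["serum_creatinine_mg_dl"]
         + COEFFS["hemoglobin_g_dl"] * labs["hemoglobin_g_dl"]
         + COEFFS["bicarbonate_mmol_l"] * labs["bicarbonate_mmol_l"])
    return 1.0 / (1.0 + math.exp(-z))


print(progression_risk({"uacr_mg_g": 300, "serum_creatinine_mg_dl": 1.4,
                        "hemoglobin_g_dl": 12.5, "bicarbonate_mmol_l": 22}))
```

In practice, a model of this type would be fit and validated on large claims-linked laboratory datasets such as the ones described in the validation study below.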


In the most recent and largest external validation study using data from 4.6 million American adults enrolled in commercial and Medicare insurance plans, the results showed Klinrisk correctly predicted CKD progression in 80%-83% of individuals over 2 years and in 78%-83% of individuals over 5 years, depending on the insurance provider, Navdeep Tangri, MD, PhD, reported at the annual meeting of the American Society of Nephrology. When urinalysis data were available, the model correctly predicted CKD progression in 81%-87% of individuals over 2 years and in 80%-87% of individuals over 5 years. These results follow prior reports of several other successful validations of Klinrisk.
 

‘Ready to implement’

“The Klinrisk model is ready to implement by any payer, health system, or clinic where the needed lab data are available,” said Dr. Tangri, a nephrologist and professor at the University of Manitoba, Winnipeg, and founder of Klinrisk Inc., the company developing and commercializing the Klinrisk assessment tool.

For the time being, Dr. Tangri sees Klinrisk as a population health tool that can allow insurers and health systems to track management quality and quality improvement and to target patients who stand to benefit most from relatively expensive resources. These resources include prescriptions for finerenone (Kerendia, Bayer) for people who also have type 2 diabetes, and agents from the class of sodium-glucose cotransporter 2 (SGLT2) inhibitors such as dapagliflozin (Farxiga, AstraZeneca) and empagliflozin (Jardiance, Boehringer Ingelheim and Lilly).



He has also begun discussions with the Food and Drug Administration about the data the agency will need to consider Klinrisk for potential approval as a new medical device, perhaps in 2025. That’s how he envisions getting a Klinrisk assessment into the hands of caregivers, who could then use it with individual patients to create an appropriate treatment plan.

Results from his new analysis showed that “all the kidney disease action is in the 10%-20% of people with the highest risk on Klinrisk, while not much happens in those in the bottom half,” Dr. Tangri said during his presentation.

“We’re trying to find the patients who get the largest [absolute] benefit from intensified treatment,” he added in an interview. “Klinrisk finds people with high-risk kidney disease early on, when kidney function is still normal or near normal. High-risk patients are often completely unrecognized. Risk-based management” that identifies the early-stage CKD patients who would benefit most from treatment with an SGLT2 inhibitor, finerenone, and other foundational treatments to slow CKD progression “is better than the free-for-all that occurs today.”

 

 

Simplified data collection

“Klinrisk is very effective,” but requires follow-up by clinicians and health systems to implement its findings, commented Josef Coresh, MD, a professor of clinical epidemiology at the Johns Hopkins Bloomberg School of Public Health, Baltimore. Dr. Coresh compared it with a free equation that estimates a person’s risk for a 40% drop in kidney function over the next 3 years. That equation was developed by Dr. Tangri, Dr. Coresh, and many collaborators led by Morgan C. Grams, MD, PhD, of New York University; it was published in 2022 and posted on a website of the CKD Prognosis Consortium.


The CKD Prognosis Consortium formula “takes a different approach” from Klinrisk. The commercial formula “is simpler, only using lab measures, and avoids inputs taken from physical examination such as systolic blood pressure and body mass index and health history data such as smoking,” noted Dr. Coresh. He also speculated that “a commercial formula that must be paid for may counterintuitively result in better follow-up for making management changes if it uses some of the resources for education and system changes.”

Equations that draw on data from multiple sources, like the CKD Prognosis Consortium equation, can create implementation challenges, said Dr. Tangri. “Lab results don’t vary much,” which makes Klinrisk “quite an improvement for implementation. It’s easier to implement.”

Other findings from the newest validation study that Dr. Tangri presented were that people with Klinrisk scores in the top 10% had, over the next 2 years of follow-up and compared with people in the bottom half of Klinrisk staging, a 3- to 5-fold higher rate of all-cause medical costs, a 13- to 30-fold increase in CKD-related costs, and a 5- to 10-fold increase in hospitalizations and ED visits.

Early identification of CKD and early initiation of intensified treatment for high-risk patients can reduce the rate of progression to dialysis, reduce hospitalizations for heart failure, and lower the cost of care, Dr. Tangri said.

The validation study in 4.6 million Americans was sponsored by Boehringer Ingelheim. Dr. Tangri founded and has an ownership interest in Klinrisk. He has also received honoraria from, has ownership interests in, and has been a consultant to multiple pharmaceutical companies. Dr. Coresh had no disclosures.


AT KIDNEY WEEK 2023


MASLD, MASH projected to grow by 23% in the U.S. through 2050

Article Type
Changed
Thu, 11/09/2023 - 13:35

The nomenclature may have changed, but the steady rise in the most common form of liver disease – metabolic dysfunction–associated steatotic liver disease (MASLD, formerly known as NAFLD) – is predicted to continue into the middle of this century.

That’s according to Phuc Le, PhD, MPH, and colleagues at the Cleveland Clinic. They created a mathematical model incorporating data on the growth of the U.S. population and the natural history of MASLD/NAFLD. The model projected a relative 23% increase in MASLD among U.S. adults from 2020 to 2050.


“Our model forecasts a substantial clinical burden of NAFLD over the next 3 decades. In the absence of effective treatments, health systems should plan for large increases in the number of liver cancer cases and the need for liver transplant,” Dr. Le said in a media briefing held on Nov. 7 prior to her presentation of the data at the annual meeting of the American Association for the Study of Liver Diseases.

The estimated worldwide prevalence of MASLD is 38%. In the United States, an estimated 27.8% of adults had MASLD as of 2020.

Dr. Le and colleagues wanted to get a clearer picture of the expected increase in the clinical burden of MASLD in the coming decades. The researchers used data from the medical literature to create an individual-level state transition model. They took into account projections of the growth of the U.S. population and the progression of MASLD and metabolic dysfunction–associated steatohepatitis (MASH, formerly NASH) through stages of fibrosis to decompensation, hepatocellular carcinoma (HCC), transplant, and liver-related death as a proportion of all-cause mortality.
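
The Cleveland Clinic model itself is far more detailed, with calibrated, age- and sex-specific inputs and background mortality. Purely to illustrate the generic individual-level state-transition (microsimulation) technique described above, the sketch below steps a simulated cohort through placeholder disease states using made-up annual transition probabilities; none of the numbers are the study’s values.

```python
# Minimal individual-level state-transition (microsimulation) sketch.
# States and annual transition probabilities are placeholders, not the study's calibrated inputs.
import random

STATES = ["MASLD", "MASH", "Cirrhosis", "HCC", "Transplant", "LiverDeath"]

# TRANSITIONS[current][next] = annual probability; rows sum to < 1, remainder = stay put.
TRANSITIONS = {
    "MASLD":      {"MASH": 0.02},
    "MASH":       {"Cirrhosis": 0.03},
    "Cirrhosis":  {"HCC": 0.02, "LiverDeath": 0.03},
    "HCC":        {"Transplant": 0.10, "LiverDeath": 0.20},
    "Transplant": {"LiverDeath": 0.05},
    "LiverDeath": {},
}


def simulate_person(years: int, rng: random.Random, state: str = "MASLD") -> str:
    """Walk one simulated adult through the model one year at a time."""
    for _ in range(years):
        draw, cumulative = rng.random(), 0.0
        for next_state, p in TRANSITIONS[state].items():
            cumulative += p
            if draw < cumulative:
                state = next_state
                break
    return state


rng = random.Random(0)
cohort = [simulate_person(30, rng) for _ in range(100_000)]
print({s: cohort.count(s) for s in STATES})  # distribution of end states after 30 years
```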
 

Validated model

They validated the model by testing it against liver outcomes from 2000 through 2018 and published data on the U.S. population. The model closely matched trends in MASLD prevalence, MASH proportion, HCC and liver transplant incidences, and overall survival rates for patients with MASLD.

As noted, the model predicted a steady increase in MASLD prevalence, from 27.8% in 2020 to 34.3% by 2050, a relative increase of about 23%. The model also predicted a slight uptick in the proportion of MASH among patients with MASLD, from 20% to 21.8%.
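
For context, the roughly 23% figure is simply the relative change implied by those two prevalence projections:

\[
\frac{34.3\% - 27.8\%}{27.8\%} \;=\; \frac{6.5}{27.8} \;\approx\; 0.23
\]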

The investigators said that the prevalence of MASLD/MASH would likely remain relatively stable among people aged 18-29 years but would increase significantly for all other age groups.

In addition, the model predicted an increase in the proportion of cirrhosis in patients with MASLD from 1.9% to 3.1%, as well as a rise in liver-related deaths from 0.4% of all deaths in 2020 to 1% by 2050.

The investigators also foresaw a rise in HCC cases, from 10,400 annually to 19,300 by 2050, and a more than twofold increase in liver transplants, from 1,700 in 2020 to 4,200 in 2050.
 

A “tsunami” of liver disease

In the question-and-answer portion of the briefing, Norah Terrault, MD, AASLD president and chief of gastroenterology and hepatology at the University of Southern California, Los Angeles, commented on the study findings and “the frightening trajectory in terms of disease burden.”


“I’m thinking to myself there’s no way we’re going to be able to transplant our way out of this tsunami of disease that’s coming our way,” she said, and asked Dr. Le what policy or societal approaches might be implemented to help stem the tide.

“This is a really huge question,” Dr. Le acknowledged. The study only provides estimates of what the future burden of disease might be if there are no changes in clinical care for patients with MASLD or if the trajectory of contributing factors, such as obesity, diabetes, and other metabolic diseases, continues to increase, she cautioned.

Raising awareness of MASLD/MASH and working to improve collaboration among liver specialists and general practitioners could help to flatten the curve, she suggested.

The study was supported by a grant from the Agency for Healthcare Research and Quality. Dr. Le and Dr. Terrault have disclosed no relevant financial relationships.


A version of this article first appeared on Medscape.com.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event
