Irregular sleep tied to markers of atherosclerosis

Updated: Mon, 02/27/2023 - 10:47

Irregular sleep – such as inconsistent sleep duration or sleep timing – may increase the risk of developing atherosclerosis among adults older than age 45, a new report suggests.

In particular, variation in sleep duration of more than 2 hours per night in the same week was tied to higher rates of atherosclerosis.

“Poor sleep is linked with several cardiovascular conditions, including heart disease, hypertension, and type 2 diabetes,” lead author Kelsie M. Full, PhD, MPH, assistant professor of medicine at Vanderbilt University Medical Center, Nashville, Tenn., said in an interview.

“Overall, we found that participants who slept varying amounts of hours throughout the week (meaning that one night they slept less, one night they slept more) were more likely to have atherosclerosis than participants who slept about the same amount of time each night,” she said.

The study was published online in the Journal of the American Heart Association.
 

Analyzing associations

Dr. Full and colleagues examined data from 2032 participants in the Multi-Ethnic Study of Atherosclerosis Sleep Ancillary Study, which included adults aged 45-84 years in six U.S. communities who completed a 7-day wrist actigraphy assessment and kept a sleep diary between 2010 and 2013.

For subclinical markers of cardiovascular disease, participants underwent assessments of coronary artery calcium, carotid plaque presence, carotid intima-media thickness, and ankle-brachial index.

The research team assessed sleep duration, or the total number of minutes of sleep in a night, and sleep timing regularity, which was determined on the basis of the time someone initially fell asleep each night. They adjusted for cardiovascular disease risk factors and sleep characteristics, such as obstructive sleep apnea, sleep duration, and sleep fragmentation.

The average age of the participants was 68.6 years, and 53.6% were women. About 37.9% identified as White, 27.6% as Black or African American, 23.4% as Hispanic American, and 11.1% as Chinese American.

During the 7-day period, about 38% of participants experienced a change in sleep duration of more than 90 minutes, and 18% experienced a sleep duration change of more than 120 minutes. Those who had irregular sleep were more likely to be non-White, to be current smokers, to have lower average annual incomes, to work shift schedules or not work, and to have a higher average body mass index.

For the study, sleep duration irregularity was defined as a standard deviation of more than 120 minutes. Compared with participants whose sleep duration was more regular, defined as an SD of 60 minutes or less, those with greater sleep duration irregularity were more likely to have a high coronary artery calcium burden (score > 300; prevalence ratio, 1.33; 95% confidence interval, 1.03-1.71) and an abnormal ankle-brachial index (< 0.9; prevalence ratio, 1.75; 95% CI, 1.03-2.95).

Further, those with irregular sleep timing (SD > 90 minutes) were more likely to have a high coronary artery calcium burden (prevalence ratio, 1.39; 95% CI, 1.07-1.82) in comparison with those with more regular sleep timing (SD < 30 minutes).

“The biggest surprise to me was that 30% of the participants in the study had total sleep times that varied by more than 90 minutes over the course of the week,” Dr. Full said. “This is consistent with prior studies that suggest that a large proportion of the general public have irregular sleep patterns, not just shift workers.”
Investigating next steps

In additional analyses, Dr. Full and colleagues found that sleep duration irregularity remained associated with high coronary artery calcium burden and abnormal ankle-brachial index after accounting for severe obstructive sleep apnea, average nightly sleep duration, and average sleep fragmentation.

Notably, when average sleep duration was added to the model, participants with more irregular sleep durations (SD > 60 minutes) were more likely to have a high coronary artery calcium burden than those with regular sleep durations (SD < 60 minutes). The results held when participants who reported shift work, including night shift work, were excluded.

Additional studies are needed to understand the mechanisms, the study authors wrote. Night-to-night variability in sleep duration and sleep timing can desynchronize sleep-wake timing and disrupt circadian rhythms.

“A key issue highlighted in this study is that sleep irregularity itself, independent of how much sleep people were getting, was related to heart health. Sleep is a naturally recurring phenomenon, and maintaining regularity helps provide stability and predictability to the body,” Michael Grandner, PhD, associate professor of psychiatry and director of the sleep and health research program at the University of Arizona, Tucson, said in an interview.

Dr. Grandner, who wasn’t involved with this study, has researched sleep irregularity and associations with cardiovascular disease, diabetes, obesity, and many other adverse outcomes.

“When people have very irregular sleep schedules, it may make it harder for the body to optimally make good use of the sleep it is getting, since it is such a moving target,” he said. “The unique angle here is the ability to focus on regularity of sleep.”

The study was supported by the National Heart, Lung, and Blood Institute and the National Center for Advancing Translational Sciences of the National Institutes of Health. One author received grants and consulting fees from pharmaceutical companies unrelated to the research. The other authors and Dr. Grandner disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

FROM THE JOURNAL OF THE AMERICAN HEART ASSOCIATION


Advanced imaging technology could help predict lung cancer progression after surgery

Updated: Thu, 03/02/2023 - 12:17

Advanced imaging technology that uses artificial intelligence can potentially predict which patients with lung cancer are likely to experience cancer progression after surgery, according to new data.

The technology, known as highly multiplexed imaging mass cytometry (IMC), can provide cellular-level detail of the tumor immune microenvironment, which may allow clinicians to identify patients who need additional treatment, as well as those who don’t.

“It is well known that the frequency of certain cell populations within the tumor microenvironment correlates with clinical outcomes. These observations help us understand the biology underlying cancer progression,” senior author Logan Walsh, PhD, assistant professor of human genetics and the Rosalind Goodman Chair in Lung Cancer Research at McGill University’s Rosalind and Morris Goodman Cancer Institute, Montreal, said in an interview.

“We wanted to test whether using completely unbiased AI could find and use the spatial topography of the tumor microenvironment from IMC data to predict clinical outcomes,” he said. “It turns out the answer is yes! AI can predict clinical outcomes when combined with IMC with extremely high accuracy from a single 1-mm² tumor core.”

The study was published in Nature.
 

The immune landscape

Lung cancer is the leading cause of cancer-related death in Canada, surpassing breast, colon, and prostate cancer deaths combined, the study authors write.

Lung adenocarcinoma, a non–small cell lung cancer, is the most common subtype and is characterized by distinct cellular and molecular features. The tumor immune microenvironment influences disease progression and therapy response, the authors write. Understanding the spatial landscape of the microenvironment could provide insight into disease progression, therapeutic vulnerabilities, and biomarkers of response to existing treatments.

In a collaborative study, Dr. Walsh and colleagues from McGill University and Université Laval profiled the cellular composition and spatial organization of the tumor immune microenvironment in tumors from 416 patients with lung adenocarcinoma across five histologic patterns. They used IMC to assess samples from the universities’ biobanks that patients had provided for research purposes.

The research team detected more than 1.6 million cells, which allowed spatial analysis of immune lineages and activation states with distinct clinical correlates, including survival. They used a supervised lineage assignment approach to classify 14 distinct immune cell populations, along with tumor cells and endothelial cells.

High-grade solid tumors had the greatest immune infiltrate (44.6%), compared with micropapillary (37%), acinar (39.7%), papillary (32.8%), and lepidic architectures (32.7%). Macrophages were the most frequent cell population in the tumor immune microenvironment, representing 12.3% of total cells and 34.1% of immune cells.

The prevalence of CD163+ macrophages was strongly correlated with FOXP3+ immunoregulatory T cells in the solid pattern. This relationship was less pronounced in low-grade lepidic and papillary architectures. This finding could suggest an interplay between macrophage and T-cell populations in the tumor immune microenvironment across lung adenocarcinoma patterns.

Using a deep neural network model, the researchers also analyzed the relationship between immune populations and clinical or pathologic variables by examining the frequency of individual cell types as a percentage of total cells in each image. Each image was cross-referenced with clinical data from patients, including sex, age, body mass index, smoking status, stage, progression, survival, and histologic subtype.

Overall, the researchers found that various clinical outcomes, including cancer progression, could be predicted with high accuracy using a single 1-mm² tumor core. For instance, they could predict progression in stage IA and IB resected lung cancer with 95.9% accuracy.
Additional applications

“We were not surprised that AI was able to predict clinical outcomes, but we were surprised that it was able to do so with such high accuracy and precision,” said Dr. Walsh. “We were also surprised to learn that our predictions were equally accurate using only six-plex data, compared with 35-plex. This hinted to us that we could potentially scale down the number of markers to a practical number that would be amenable to technologies available in routine pathology labs.”

Dr. Walsh and colleagues are now validating the predictive tool using a lower-plex technology. In addition, they are investigating the immune landscapes of primary and metastatic brain tumors.

“This study is important, as it helps us to understand and appreciate the biological and mechanistic factors that may influence treatment outcomes. Our standard clinical predictors for predicting risk of recurrence and probability of response to therapy are not optimal,” Yee Ung, MD, an associate professor of radiation oncology at Sunnybrook Health Sciences Centre, Toronto, said in an interview.

Dr. Ung, who wasn’t involved with this study, has researched noninvasive hypoxia imaging and targeting in lung cancer. Ideally, he said, future studies should incorporate the use of noninvasive imaging predictive factors, in addition to the tumor immune microenvironment and clinical factors, to predict outcomes and provide personalized treatment.

“As we begin to investigate and understand more about cancer biology down to the cellular and molecular level, we need to strategically use AI methodologies in the processing and analysis of data,” he said.

The study was supported by the McGill Interdisciplinary Initiative in Infection and Immunity, the Brain Tumour Funders’ Collaborative, the Canadian Institutes of Health Research, and the Canadian Foundation for Innovation. Dr. Walsh and Dr. Ung have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Advanced imaging technology that uses artificial intelligence can potentially predict which patients with lung cancer are likely to experience cancer progression after surgery, according to new data.

The technology, known as highly multiplexed imaging mass cytometry (IMC), can provide cellular-level detail of the tumor immune microenvironment, which may allow clinicians to identify patients who need additional treatment, as well as those who don’t.

“It is well known that the frequency of certain cell populations within the tumor microenvironment correlates with clinical outcomes. These observations help us understand the biology underlying cancer progression,” senior author Logan Walsh, PhD, assistant professor of human genetics and the Rosalind Goodman Chair in Lung Cancer Research at McGill University’s Rosalind and Morris Goodman Cancer Institute, Montreal, said in an interview.

“We wanted to test whether using completely unbiased AI could find and use the spatial topography of the tumor microenvironment from IMC data to predict clinical outcomes,” he said. “It turns out the answer is yes! AI can predict clinical outcomes when combined with IMC with extremely high accuracy from a single 1-mm2 tumor core.”

The study was published on in Nature.
 

The immune landscape

Lung cancer is the leading cause of cancer-related death in Canada, surpassing breast, colon, and prostate cancer deaths combined, the study authors write.

Lung adenocarcinoma, a non–small cell lung cancer, is the most common subtype and is characterized by distinct cellular and molecular features. The tumor immune microenvironment influences disease progression and therapy response, the authors write. Understanding the spatial landscape of the microenvironment could provide insight into disease progression, therapeutic vulnerabilities, and biomarkers of response to existing treatments.

In a collaborative study, Dr. Walsh and colleagues from McGill University and Université Laval profiled the cellular composition and spatial organization of the tumor immune microenvironment in tumors from 416 patients with lung adenocarcinoma across five histologic patterns. They used IMC to assess at samples from the universities’ biobanks that patients had provided for research purposes.

The research team detected more than 1.6 million cells, which allowed spatial analysis of immune lineages and activation states with distinct clinical correlates, including survival. They used a supervised lineage assignment approach to classify 14 distinct immune cell populations, along with tumor cells and endothelial cells.

High-grade solid tumors had the greatest immune infiltrate (44.6%), compared with micropapillary (37%), acinar (39.7%), papillary (32.8%), and lepidic architectures (32.7%). Macrophages were the most frequent cell population in the tumor immune microenvironment, representing 12.3% of total cells and 34.1% of immune cells.

The prevalence of CD163+ macrophages was strongly correlated with FOXP3+ immunoregulatory T cells in the solid pattern. This relationship was less pronounced in low-grade lepidic and papillary architectures. This finding could suggest an interplay between macrophage and T-cell populations in the tumor immune microenvironment across lung adenocarcinoma patterns.

Using a deep neural network model, the researchers also analyzed the relationship between immune populations and clinical or pathologic variables by examining the frequency of individual cell types as a percentage of total cells in each image. Each image was cross-referenced with clinical data from patients, including sex, age, body mass index, smoking status, stage, progression, survival, and histologic subtype.

Overall, the researchers found that various clinical outcomes, including cancer progression, could be predicted with high accuracy using a single 1-mm2 tumor core. For instance, they could predict progression in stage IA and IB resected lung cancer with 95.9% accuracy.
 

 

 

Additional applications

“We were not surprised that AI was able to predict clinical outcomes, but we were surprised that it was able to do so with such high accuracy and precision,” said Dr. Walsh. “We were also surprised to learn that our predictions were equally accurate using only six-plex data, compared with 35-plex. This hinted to us that we could potentially scale down the number of markers to a practical number that would be amenable to technologies available in routine pathology labs.”

Dr. Walsh and colleagues are now validating the predictive tool using a lower-plex technology. In addition, they are investigating the immune landscapes of primary and metastatic brain tumors.

“This study is important, as it helps us to understand and appreciate the biological and mechanistic factors that may influence treatment outcomes. Our standard clinical predictors for predicting risk of recurrence and probability of response to therapy are not optimal,” Yee Ung, MD, an associate professor of radiation oncology at Sunnybrook Health Sciences Centre, Toronto, said in an interview.

Dr. Ung, who wasn’t involved with this study, has researched noninvasive hypoxia imaging and targeting in lung cancer. Ideally, he said, future studies should incorporate the use of noninvasive imaging predictive factors, in addition to the tumor immune microenvironment and clinical factors, to predict outcomes and provide personalized treatment.

“As we begin to investigate and understand more about cancer biology down to the cellular and molecular level, we need to strategically use AI methodologies in the processing and analysis of data,” he said.

The study was supported by the McGill Interdisciplinary Initiative in Infection and Immunity, the Brain Tumour Funders’ Collaborative, the Canadian Institutes of Health Research, and the Canadian Foundation for Innovation. Dr. Walsh and Dr. Ung have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Advanced imaging technology that uses artificial intelligence can potentially predict which patients with lung cancer are likely to experience cancer progression after surgery, according to new data.

The technology, known as highly multiplexed imaging mass cytometry (IMC), can provide cellular-level detail of the tumor immune microenvironment, which may allow clinicians to identify patients who need additional treatment, as well as those who don’t.

“It is well known that the frequency of certain cell populations within the tumor microenvironment correlates with clinical outcomes. These observations help us understand the biology underlying cancer progression,” senior author Logan Walsh, PhD, assistant professor of human genetics and the Rosalind Goodman Chair in Lung Cancer Research at McGill University’s Rosalind and Morris Goodman Cancer Institute, Montreal, said in an interview.

“We wanted to test whether using completely unbiased AI could find and use the spatial topography of the tumor microenvironment from IMC data to predict clinical outcomes,” he said. “It turns out the answer is yes! AI can predict clinical outcomes when combined with IMC with extremely high accuracy from a single 1-mm2 tumor core.”

The study was published on in Nature.
 

The immune landscape

Lung cancer is the leading cause of cancer-related death in Canada, surpassing breast, colon, and prostate cancer deaths combined, the study authors write.

Lung adenocarcinoma, a non–small cell lung cancer, is the most common subtype and is characterized by distinct cellular and molecular features. The tumor immune microenvironment influences disease progression and therapy response, the authors write. Understanding the spatial landscape of the microenvironment could provide insight into disease progression, therapeutic vulnerabilities, and biomarkers of response to existing treatments.

In a collaborative study, Dr. Walsh and colleagues from McGill University and Université Laval profiled the cellular composition and spatial organization of the tumor immune microenvironment in tumors from 416 patients with lung adenocarcinoma across five histologic patterns. They used IMC to assess at samples from the universities’ biobanks that patients had provided for research purposes.

The research team detected more than 1.6 million cells, which allowed spatial analysis of immune lineages and activation states with distinct clinical correlates, including survival. They used a supervised lineage assignment approach to classify 14 distinct immune cell populations, along with tumor cells and endothelial cells.

High-grade solid tumors had the greatest immune infiltrate (44.6%), compared with micropapillary (37%), acinar (39.7%), papillary (32.8%), and lepidic architectures (32.7%). Macrophages were the most frequent cell population in the tumor immune microenvironment, representing 12.3% of total cells and 34.1% of immune cells.
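As a back-of-envelope consistency check (not a calculation from the paper), the two reported macrophage fractions together imply the overall immune fraction of cells in the pooled dataset:

```python
# Reported: macrophages are 12.3% of all cells and 34.1% of immune cells.
macrophage_share_of_total = 0.123
macrophage_share_of_immune = 0.341

# If both figures describe the same pooled cell counts, their ratio gives
# the implied fraction of all cells that are immune cells.
implied_immune_fraction = macrophage_share_of_total / macrophage_share_of_immune
print(f"Implied immune fraction: {implied_immune_fraction:.1%}")  # ~36.1%
```

That pooled figure of roughly 36% sits plausibly within the per-pattern infiltrate range reported above (32.7%-44.6%).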

The prevalence of CD163+ macrophages was strongly correlated with FOXP3+ immunoregulatory T cells in the solid pattern. This relationship was less pronounced in low-grade lepidic and papillary architectures. This finding could suggest an interplay between macrophage and T-cell populations in the tumor immune microenvironment across lung adenocarcinoma patterns.

Using a deep neural network model, the researchers also analyzed the relationship between immune populations and clinical or pathologic variables by examining the frequency of individual cell types as a percentage of total cells in each image. Each image was cross-referenced with clinical data from patients, including sex, age, body mass index, smoking status, stage, progression, survival, and histologic subtype.

Overall, the researchers found that various clinical outcomes, including cancer progression, could be predicted with high accuracy using a single 1-mm² tumor core. For instance, they could predict progression in stage IA and IB resected lung cancer with 95.9% accuracy.
 

 

 

Additional applications

“We were not surprised that AI was able to predict clinical outcomes, but we were surprised that it was able to do so with such high accuracy and precision,” said Dr. Walsh. “We were also surprised to learn that our predictions were equally accurate using only six-plex data, compared with 35-plex. This hinted to us that we could potentially scale down the number of markers to a practical number that would be amenable to technologies available in routine pathology labs.”

Dr. Walsh and colleagues are now validating the predictive tool using a lower-plex technology. In addition, they are investigating the immune landscapes of primary and metastatic brain tumors.

“This study is important, as it helps us to understand and appreciate the biological and mechanistic factors that may influence treatment outcomes. Our standard clinical predictors for predicting risk of recurrence and probability of response to therapy are not optimal,” Yee Ung, MD, an associate professor of radiation oncology at Sunnybrook Health Sciences Centre, Toronto, said in an interview.

Dr. Ung, who wasn’t involved with this study, has researched noninvasive hypoxia imaging and targeting in lung cancer. Ideally, he said, future studies should incorporate the use of noninvasive imaging predictive factors, in addition to the tumor immune microenvironment and clinical factors, to predict outcomes and provide personalized treatment.

“As we begin to investigate and understand more about cancer biology down to the cellular and molecular level, we need to strategically use AI methodologies in the processing and analysis of data,” he said.

The study was supported by the McGill Interdisciplinary Initiative in Infection and Immunity, the Brain Tumour Funders’ Collaborative, the Canadian Institutes of Health Research, and the Canadian Foundation for Innovation. Dr. Walsh and Dr. Ung have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM NATURE


Novel celery seed–derived drug may improve stroke outcomes

Article Type
Changed
Wed, 02/22/2023 - 15:19

Butylphthalide, a medication derived from celery seed, may improve outcomes after an acute ischemic stroke when given in addition to thrombolysis or endovascular treatment, a new report suggests.

Patients treated with butylphthalide had fewer severe neurologic symptoms and better function 90 days after the stroke, compared with those receiving placebo.

Butylphthalide is approved and available for use in China, where the study was conducted. However, the medication hasn’t been approved for use by the U.S. Food and Drug Administration.

“Patients who received butylphthalide had less severe neurological symptoms and a better living status at 90 days post stroke, compared to those who received the placebo,” said coauthor Baixue Jia, MD, an attending physician in interventional neuroradiology at the Beijing Tiantan Hospital of Capital Medical University and a faculty member at the China National Clinical Research Center for Neurological Diseases in Beijing. “If the results are confirmed in other trials, this may lead to more options to treat strokes caused by clots.”

The study was presented at the International Stroke Conference, hosted by the American Stroke Association, a division of the American Heart Association.
 

Studying stroke outcomes

The researchers described butylphthalide as a cerebroprotective drug that was originally extracted from seeds of Apium graveolens. In China, previous studies have shown that the drug has cerebroprotective effects in animal models of ischemia-reperfusion, they noted.

In this randomized, double-blind, placebo-controlled trial, Dr. Jia and colleagues evaluated whether treatment with butylphthalide could improve 90-day outcomes for adults with acute ischemic stroke who received intravenous recombinant tissue plasminogen activator (tPA), endovascular treatment, or both.

The participants were treated at one of 59 medical centers in China between July 2018 and February 2022. Those who had minimal stroke symptoms on their initial exam, defined as a score of 0-3 on the National Institutes of Health Stroke Scale (NIHSS), or severe stroke symptoms, defined as a score of 26 or higher on the NIHSS, were excluded from the study.

Along with an initial revascularization intervention chosen by their physician, participants were randomly selected to receive either butylphthalide or a placebo daily for 90 days. The drug was administered through daily intravenous injections for the first 14 days, after which patients received oral capsules for 76 days.

The research team defined the outcomes as “favorable” if a patient fell into one of the following categories 90 days after the stroke: an initially mild to moderate stroke (NIHSS, 4-7) and no symptoms after treatment, defined as a score of 0 on the Modified Rankin Scale (mRS), which measures disability and dependence; an initially moderate to serious stroke (NIHSS, 8-14) and no residual symptoms or mild symptoms that don’t impair the ability to perform routine activities of daily living without assistance (mRS, 0-1); or an initially serious to severe stroke (NIHSS, 15-25) and no remaining symptoms or a slight disability that impairs some activities but allows one to conduct daily living without assistance (mRS, 0-2).

Secondary outcomes included symptomatic intracranial hemorrhage, recurrent stroke, and mortality.

Among the 1,216 participants, 607 were assigned to the treatment group, and 609 were assigned to the placebo group. The average age was 66 years, and 68% were men.

Overall, participants in the butylphthalide group were 70% more likely to have a favorable 90-day outcome, compared with the placebo group. Favorable outcomes occurred in 344 patients (56.7%) in the butylphthalide group, compared with 268 patients (44%) in the placebo group (odds ratio, 1.70; 95% confidence interval, 1.35-2.14; P < .001).
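For readers who want to trace the arithmetic, a minimal sketch using only the counts reported above. Note that the published odds ratio of 1.70 is the trial's adjusted estimate, so the crude odds ratio computed directly from the 2×2 table comes out slightly lower:

```python
# Reported trial counts: favorable 90-day outcomes per arm
fav_drug, n_drug = 344, 607    # butylphthalide group
fav_pbo, n_pbo = 268, 609      # placebo group

rate_drug = fav_drug / n_drug  # ~0.567, i.e. 56.7%
rate_pbo = fav_pbo / n_pbo     # ~0.440, i.e. 44%

# Crude (unadjusted) odds ratio from the 2x2 table
odds_drug = fav_drug / (n_drug - fav_drug)
odds_pbo = fav_pbo / (n_pbo - fav_pbo)
crude_or = odds_drug / odds_pbo  # ~1.66; the published 1.70 reflects model adjustment
```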

In addition, butylphthalide improved function equally well for the patients who initially received tPA, those who received endovascular treatment, and those who received both tPA and endovascular treatment.

Secondary events, such as recurrent stroke and intracranial hemorrhage, weren’t significantly different between the butylphthalide and placebo groups.
 

 

 

Ongoing questions

Dr. Jia and colleagues noted the need to understand how butylphthalide works in the brain. Animal studies have suggested several possible mechanisms, but the drug’s mechanism of action in humans remains unclear.

“The next step should be investigating the exact mechanisms of butylphthalide in humans,” Dr. Jia said.

Additional research should assess the medication in other populations, the authors noted, particularly because the study involved participants who received initial treatment with tPA, endovascular treatment, or both. The results may not be generalizable to stroke patients who receive other treatments or to populations outside of China.

“While these are interesting results, this is only one relatively small study on a fairly select population in China. Butylphthalide, a medication initially compounded from celery seed, is not ready for use in standard stroke treatment,” said Daniel Lackland, DrPH, professor of neurology and director of the division of translational neurosciences and population studies at the Medical University of South Carolina, Charleston.

Dr. Lackland, who wasn’t involved with the study, is a member of the American Stroke Association’s Stroke Council. Although butylphthalide was originally extracted from seeds, he noted, it’s not what patients would find commercially available.

“The medication used in this study is not the same as celery seed or celery seed extract supplements,” he said. “Stroke survivors should always consult with their neurologist or healthcare professional regarding diet after a stroke.”

The study was funded by the National Key Technology Research and Development Program of the Ministry of Science and Technology of the People’s Republic of China and Shijiazhuang Pharmaceutical Group dl-3-butylphthalide Pharmaceutical. Several authors are employed by Beijing Tiantan Hospital and the Beijing Institute of Brain Disorders. Dr. Lackland reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.



FROM ISC 2023


Exercise training reduces liver fat in patients with NAFLD, even without weight loss

Article Type
Changed
Wed, 02/15/2023 - 10:12

Exercise training is 3.5 times more likely to result in a clinically meaningful response in liver fat, compared with standard clinical care, for patients with nonalcoholic fatty liver disease (NAFLD), according to a new systematic review and meta-analysis.

An exercise dose of 750 metabolic equivalents of task (MET)–minutes per week – or 150 minutes per week of brisk walking – was required to achieve a treatment response, independently of weight loss.

“In the absence of a regulatory agency–approved drug treatment or a cure, lifestyle modification with dietary change and increased exercise is recommended for all patients with NAFLD,” first author Jonathan Stine, MD, an associate professor of medicine and public health sciences and director of the fatty liver program at the Penn State Health Milton S. Hershey Medical Center, Hershey, said in an interview.

“With that said, there are many key unanswered questions about how to best prescribe exercise as medicine to our patients with NAFLD, including whether the liver-specific benefit of exercise can be seen without any body weight loss,” Dr. Stine said. “And if found, what dose of exercise is required in order to achieve clinically meaningful benefit?” He noted that this analysis is a step toward helping to answer these questions.

The study by Dr. Stine and colleagues was published online in The American Journal of Gastroenterology.
 

Analyzing studies

Exercise training, which includes planned and structured physical activity intended to improve physical fitness, has been shown to provide multiple benefits for patients with NAFLD, the study authors wrote. The gains include improvements in liver fat, physical fitness, body composition, vascular biology, and health-related quality of life.

However, it has been unclear whether exercise training achieves a 30% or more relative reduction in liver fat, which is considered the minimal clinically important difference and is a surrogate for histologic response or improvement in liver fibrosis.
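The distinction between absolute and relative change matters for the figures that follow, so here is a minimal illustration with hypothetical liver fat values (the 20% baseline is invented for the example, not taken from the study):

```python
def relative_change(baseline, followup):
    """Relative change in MRI-measured liver fat, as a fraction of baseline."""
    return (followup - baseline) / baseline

# Hypothetical patient: liver fat falls from 20% to 14% of liver volume.
baseline, followup = 20.0, 14.0
absolute = followup - baseline                  # -6 percentage points
relative = relative_change(baseline, followup)  # -0.30, a 30% relative reduction

# A 30% or greater relative reduction meets the minimal clinically
# important difference used as the primary outcome in this analysis.
meets_mcid = relative <= -0.30
```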

In their systematic review and meta-analysis, Dr. Stine and colleagues analyzed the evidence for MRI-measured liver reduction in response to exercise training across different doses, with a 30% or more relative reduction serving as the primary outcome. They included randomized controlled trials in adults with NAFLD who participated in exercise training programs.

The 14 studies included a total of 551 participants. The average age of the participants was 53 years, and the average body mass index was 31 kg/m². The duration of the interventions ranged from 4 to 52 weeks and included different types of exercise, such as aerobic, high-intensity interval, resistance, and aerobic plus resistance training.

No study yielded the clinically significant weight loss required for histologic response (7%-10%). The average weight loss was about 2.8% among those who participated in exercise training.

Overall, seven studies with 152 participants had data for the 30% or more relative reduction in MRI-measured liver fat. The pooled rate was 34% for exercise training and 13% for the control condition.

In general, those who participated in exercise training were 3.5 times more likely to achieve a 30% or more relative reduction in MRI-measured liver fat than those in the control condition.

Among all participants, the mean change in absolute liver fat was –6.7% for the 338 participants enrolled in exercise training, compared with –0.8% for the 213 participants under the control condition. The pooled mean difference in absolute change in MRI-measured liver fat for exercise training versus the control was –5.8%.

For relative change in MRI-measured liver fat, researchers analyzed nine studies with 195 participants – 118 participants in exercise training, and 77 control participants. The mean relative change was –24.1% among the exercise training group and 7.3% among the control group. The pooled mean difference in relative change for exercise training versus the control was –26.4%.

For all 14 studies, an exercise dose of 750 or more MET-minutes per week resulted in a significant treatment response. This equates to 150 minutes per week of moderate-intensity exercise, such as brisk walking, or 75 minutes per week of vigorous-intensity exercise, such as jogging or cycling.
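The dose arithmetic above can be sketched as follows. The MET values used here (moderate ≈ 5, vigorous ≈ 10) are illustrative assumptions chosen to match the article's stated equivalence, not figures reported by the study:

```python
def met_minutes_per_week(mets, minutes_per_week):
    """Weekly exercise dose: intensity (METs) times duration (minutes/week)."""
    return mets * minutes_per_week

MODERATE_METS = 5.0   # assumed intensity, e.g. brisk walking
VIGOROUS_METS = 10.0  # assumed intensity, e.g. jogging or cycling

# Both prescriptions reach the 750 MET-minute/week treatment-response threshold.
assert met_minutes_per_week(MODERATE_METS, 150) == 750
assert met_minutes_per_week(VIGOROUS_METS, 75) == 750
```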

Among participants who had 750 MET-minutes per week, there was a –8% absolute and –28.9% relative mean difference in MRI-measured liver fat, compared with –4.1% and –22.8%, respectively, among those who had fewer than 750 MET-minutes per week.

An exercise dose of 750 or more MET-minutes per week led to a 30% or more relative reduction in MRI-measured liver fat in 39.3% of participants, compared with 25.7% who had fewer than that threshold.

The treatment response was independent of clinically significant body weight loss of more than 5%.

“Prior to our study, it was felt that body weight loss of at least 5% was required in order to significantly improve liver histology,” Dr. Stine said. “Our findings challenge this thought in that exercise training achieved rates of clinically significant liver fat reduction.”
 

 

 

Ongoing research

Dr. Stine and colleagues are continuing their research and are directly comparing exercise doses of 750 MET-minutes per week and 1,000 MET-minutes per week to standard clinical care in adults with biopsy-proven nonalcoholic steatohepatitis, or the progressive type of NAFLD.

“Importantly, this new study we’re undertaking is designed to mimic a real-world setting in which people’s daily schedules are highly variable,” he said. “Our experienced team of exercise professionals may vary frequency and time of exercise in a week so long as our study participant achieves the prescribed dose of exercise.”

Currently, leading professional societies have not reached consensus regarding the optimal physical activity program for patients with NAFLD, the study authors wrote. However, most clinical guidelines support at least 150 minutes per week of moderate-intensity aerobic activity.

Although more head-to-head clinical trials are needed, exercise training appears to reduce liver fat and provides other benefits, such as cardiorespiratory fitness, body composition changes, and improvements in vascular biology, they wrote.

“The important piece here is that this review shows that there does not have to be weight loss for improvements in fatty liver,” Jill Kanaley, PhD, a professor of nutrition and exercise physiology at University of Missouri–Columbia, said in an interview.

Dr. Kanaley, who wasn’t involved with this study, has researched exercise training among patients with NAFLD. She and her colleagues have found that moderate- and high-intensity exercise can decrease intrahepatic lipid content and NAFLD risk factors, independently of abdominal fat or body mass reductions.

“So often, people get frustrated with exercise if they do not see weight loss,” she said. “But in this case, there seems to be benefits of the exercise, even without weight loss.”

The study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases. The authors have received research funding and have had consultant roles with numerous pharmaceutical companies. Dr. Kanaley reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections




FROM THE AMERICAN JOURNAL OF GASTROENTEROLOGY


Be aware of hepatic encephalopathy, dementia overlap in older patients with cirrhosis

Article Type
Changed
Wed, 04/19/2023 - 10:41

Dementia frequently coexists with hepatic encephalopathy (HE) in patients with cirrhosis but doesn’t correlate with other decompensating events, according to a new study involving U.S. veterans.

The overlap between dementia and HE was also independent of alcohol use, brain injury, age, and other metabolic risk factors.

“The aging of patients with cirrhosis leads us to encounter several individuals who may be prone to both of these diseases,” senior author Jasmohan Bajaj, MD, a professor of gastroenterology, hepatology, and nutrition at Virginia Commonwealth University Medical Center and GI section of the Central Virginia Veterans Healthcare System in Richmond, said in an interview.

“Given the epidemic of metabolic syndrome and alcohol, consider excluding cirrhosis in your patient [for] whom the presumptive diagnosis is dementia, since they could have concomitant HE,” he said.

“On the flip side, in those with HE who have predominant long-term memory issues and persistent cognitive changes, consider consulting a neuropsychiatrist or neurologist to ensure there is a resolution of the underlying disease process,” Dr. Bajaj added.

The study was published online in The American Journal of Gastroenterology.

Analyzing associations

HE is a common decompensating event in patients with cirrhosis. Because of the aging population of patients with cirrhosis, however, it’s important to differentiate HE from nonhepatic etiologies of cognitive impairment, such as dementia, the authors note.

Using data from the VA Corporate Data Warehouse, Dr. Bajaj and colleagues identified veterans with cirrhosis who received VA care between October 2019 and September 2021 and compared baseline characteristics between the cohorts based on the presence or absence of dementia. The research team then evaluated factors associated with having a diagnosis of dementia, adjusting for demographics, comorbid illnesses, cirrhosis etiology, and cirrhosis complications.

Investigators identified 71,522 veterans with diagnostic codes for cirrhosis who were engaged in VA care in 2019. They were mostly men (96.2%) and had a median age of 66. The most common etiologies of cirrhosis were alcohol and hepatitis C, followed by nonalcoholic steatohepatitis (NASH). Most veterans had compensated cirrhosis, with a median MELD-Na score of 9. The MELD-Na score gauges the severity of chronic liver disease, using serum bilirubin, serum creatinine, the international normalized ratio for prothrombin time, and serum sodium to predict survival.

Among those with cirrhosis, 5,647 (7.9%) also had dementia diagnosis codes. This rate is higher than the prevalence of dementia in the general population and equivalent to the rate of dementia in veterans without cirrhosis who are older than 65, the authors note.

In general, veterans with dementia tended to be older, to be White, to live in an urban area, and to have higher MELD-Na scores, and they were more frequently diagnosed with alcohol-related cirrhosis, alcohol and tobacco use disorder, diabetes, chronic kidney disease, chronic heart failure, brain trauma, and cerebrovascular disease.

In a multivariable analysis, the presence of any decompensating event was significantly associated with dementia. In subsequent analyses of individual decompensating events, however, the strongest association was with HE, while ascites or variceal bleeding did not add to the risk.

When HE was instead defined by filled prescriptions for lactulose or rifaximin, the frequency of patients with HE decreased from 13.7% to 10.9%. In an analysis using this stricter definition of HE as the decompensating event, the association between HE and dementia remained significant, as it was when HE was defined by diagnostic codes alone.

“We were surprised by the high proportion of patients with dementia who also had cirrhosis, and given the genuine difficulty that clinicians have with defining HE vs. dementia, we were not very surprised at that overlap,” Dr. Bajaj said.

“We were also surprised at the specificity of this overlap only with HE and not with other decompensating events, which was also independent of head injury, alcohol use, and PTSD,” he added.

Additional research needed

Future research should look at the characteristics of HE, including the number of episodes or breakthrough episodes, and should focus on objective biomarkers to differentiate dementia and HE, the study authors write.

“The distinction and study of potential overlapping features among HE and dementia is important because HE is often treatable with medications and reverses after liver transplant, while this does not occur with dementia,” they add.

Dr. Bajaj and colleagues call for a greater awareness of disease processes and complications in older patients with cirrhosis, particularly since diagnostic imprecision can lead to patient and family confusion, distrust, and ineffective treatment.

The study will help physicians better understand the important overlap between dementia and HE, said Eric Orman, MD, an associate professor of medicine at Indiana University, Indianapolis.

Dr. Orman, who wasn’t involved with this study, has researched recent trends in the characteristics and outcomes of patients with newly diagnosed cirrhosis and has found that the proportion of older adults has increased, as well as those with alcoholic cirrhosis and NASH, which has implications for future patient care.

“It is important to recognize that both dementia and HE can occur either separately or concurrently in individuals with cirrhosis,” Dr. Orman told this news organization. “When seeing patients with cognitive impairment, having a high index of suspicion for both conditions is critical to ensure appropriate diagnosis and treatment.”

The study’s findings “represent the tip of the iceberg,” Neal Parikh, MD, an assistant professor of neurology and neuroscience at Weill Cornell Medicine in New York, said in an interview. “There is a tremendous amount left to be discovered regarding the role of the liver in brain health.”

Dr. Parikh, who wasn’t associated with this study, has researched the impact of chronic liver conditions on cognitive impairment and dementia. He is working on a project that addresses HE in detail.

“There is growing recognition of a so-called ‘liver-brain axis,’ with several researchers, including my group, showing that a range of chronic liver conditions may detrimentally impact cognitive function and increase the risk of dementia,” he said. “Studying the specific contributions of cirrhosis is critical for understanding the role of hepatic encephalopathy in age-related cognitive decline.”

The study received no financial support. The authors reported no potential competing interests.

A version of this article first appeared on Medscape.com.



Dementia frequently coexists with hepatic encephalopathy (HE) in patients with cirrhosis but doesn’t correlate with other decompensating events, according to a new study involving U.S. veterans.

The overlap between dementia and HE was also independent of alcohol use, brain injury, age, and other metabolic risk factors.

“The aging of patients with cirrhosis leads us to encounter several individuals who may be prone to both of these diseases,” senior author Jasmohan Bajaj, MD, a professor of gastroenterology, hepatology, and nutrition at Virginia Commonwealth University Medical Center and GI section of the Central Virginia Veterans Healthcare System in Richmond, said in an interview.

“Given the epidemic of metabolic syndrome and alcohol, consider excluding cirrhosis in your patient [for] whom the presumptive diagnosis is dementia, since they could have concomitant HE,” he said.

“On the flip side, in those with HE who have predominant long-term memory issues and persistent cognitive changes, consider consulting a neuropsychiatrist or neurologist to ensure there is a resolution of the underlying disease process,” Dr. Bajaj added.

The study was published online in The American Journal of Gastroenterology.
 

Analyzing associations

HE is a common decompensating event in patients with cirrhosis. Because of the aging population of patients with cirrhosis, however, it’s important to differentiate HE from nonhepatic etiologies of cognitive impairment, such as dementia, the authors note.

Using data from the VA Corporate Data Warehouse, Dr. Bajaj and colleagues identified veterans with cirrhosis who received VA care between October 2019 and September 2021 and compared baseline characteristics between the cohorts based on the presence or absence of dementia. The research team then evaluated factors associated with having a diagnosis of dementia, adjusting for demographics, comorbid illnesses, cirrhosis etiology, and cirrhosis complications.

Investigators identified 71,522 veterans with diagnostic codes for cirrhosis who were engaged in VA care in 2019. They were mostly men (96.2%) and had a median age of 66. The most common etiologies of cirrhosis were alcohol and hepatitis C, followed by nonalcoholic steatohepatitis (NASH). The group also included veterans with predominantly compensated cirrhosis and a median MELD-Na score of 9. The MELD-Na score gauges the severity of chronic liver disease using values such as serum bilirubin, serum creatinine, and the international normalized ratio for prothrombin time and sodium to predict survival.

Among those with cirrhosis, 5,647 (7.9%) also had dementia diagnosis codes. This rate is higher than the prevalence of dementia in the general population and equivalent to the rate of dementia in veterans without cirrhosis who are older than 65, the authors note.

In general, veterans with dementia tended to be older, to be White, to live in an urban area, and to have higher MELD-Na scores, and they were more frequently diagnosed with alcohol-related cirrhosis, alcohol and tobacco use disorder, diabetes, chronic kidney disease, chronic heart failure, brain trauma, and cerebrovascular disease.

In a multivariable analysis, the presence of any decompensating event was significantly associated with dementia. In subsequent analyses of individual decompensating events, however, the strongest association was with HE, while ascites or variceal bleeding did not add to the risk.

When HE was defined as patients who filled prescriptions for lactulose or rifaximin, the frequency of patients with HE decreased from 13.7% to 10.9%. In an analysis with HE as the decompensating event, the association between HE and dementia remained significant compared to when HE was defined by diagnostic codes alone.

“We were surprised by the high proportion of patients with dementia who also had cirrhosis, and given the genuine difficulty that clinicians have with defining HE vs. dementia, we were not very surprised at that overlap,” Dr. Bajaj said.

“We were also surprised at the specificity of this overlap only with HE and not with other decompensating events, which was also independent of head injury, alcohol use, and PTSD,” he added.
 

 

 

Additional research needed

Future research should look at the characteristics of HE, including the number of episodes or breakthrough episodes, and should focus on objective biomarkers to differentiate dementia and HE, the study authors write.

“The distinction and study of potential overlapping features among HE and dementia is important because HE is often treatable with medications and reverses after liver transplant, while this does not occur with dementia,” they add.

Dr. Bajaj and colleagues call for a greater awareness of disease processes and complications in older patients with cirrhosis, particularly since diagnostic imprecision can lead to patient and family confusion, distrust, and ineffective treatment.

The study will help physicians better understand the important overlap between dementia and HE, said Eric Orman, MD, an associate professor of medicine at Indiana University, Indianapolis.

Dr. Orman, who wasn’t involved with this study, has researched recent trends in the characteristics and outcomes of patients with newly diagnosed cirrhosis and has found that the proportions of older adults and of patients with alcoholic cirrhosis and NASH have increased, which has implications for future patient care.

“It is important to recognize that both dementia and HE can occur either separately or concurrently in individuals with cirrhosis,” Dr. Orman told this news organization. “When seeing patients with cognitive impairment, having a high index of suspicion for both conditions is critical to ensure appropriate diagnosis and treatment.”

The study’s findings “represent the tip of the iceberg,” Neal Parikh, MD, an assistant professor of neurology and neuroscience at Weill Cornell Medicine in New York, said in an interview. “There is a tremendous amount left to be discovered regarding the role of the liver in brain health.”

Dr. Parikh, who wasn’t associated with this study, has researched the impact of chronic liver conditions on cognitive impairment and dementia. He is working on a project that addresses HE in detail.

“There is growing recognition of a so-called ‘liver-brain axis,’ with several researchers, including my group, showing that a range of chronic liver conditions may detrimentally impact cognitive function and increase the risk of dementia,” he said. “Studying the specific contributions of cirrhosis is critical for understanding the role of hepatic encephalopathy in age-related cognitive decline.”

The study received no financial support. The authors reported no potential competing interests.

A version of this article first appeared on Medscape.com.

Article Source

FROM THE AMERICAN JOURNAL OF GASTROENTEROLOGY


USPSTF backs screening for hypertensive disorders of pregnancy

Article Type
Changed
Thu, 02/09/2023 - 17:14

The U.S. Preventive Services Task Force (USPSTF) recommends that clinicians screen for hypertensive disorders of pregnancy, which can cause serious and fatal complications, according to a new draft statement.

All pregnant people should have their blood pressure measured at each prenatal visit to identify and prevent serious health problems. The grade B recommendation expands on the task force’s 2017 recommendation on screening for preeclampsia to include all hypertensive disorders of pregnancy.

“Hypertensive disorders of pregnancy are some of the leading causes of serious complications and death for pregnant people,” Esa Davis, MD, a USPSTF member and associate professor of medicine and clinical and translational science at the University of Pittsburgh School of Medicine, told this news organization.

In the U.S., the rate of hypertensive disorders of pregnancy has increased in recent decades, jumping from about 500 cases per 10,000 deliveries in the early 1990s to more than 1,000 cases per 10,000 deliveries in the mid-2010s.

“The U.S. Preventive Services Task Force wants to help save the lives of pregnant people and their babies by ensuring that clinicians have the most up-to-date guidance on how to find these conditions early,” she said.

The draft recommendation statement was published online.
 

Screening recommendation

Hypertensive disorders of pregnancy, including gestational hypertension, preeclampsia, eclampsia, and chronic hypertension with and without superimposed preeclampsia, are marked by elevated blood pressure during pregnancy.
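
Since the task force's recommended screen is simply a blood-pressure measurement at each prenatal visit, the triage logic can be illustrated in a few lines. The 140/90 mmHg and 160/110 mmHg cutoffs below are the widely used clinical thresholds, not figures from the draft statement, and a real diagnosis requires repeated readings and clinical evaluation rather than a single measurement.

```python
def classify_prenatal_bp(systolic, diastolic, gestational_weeks, chronic_htn=False):
    """Toy triage of a single prenatal blood-pressure reading (illustrative only)."""
    # Severe-range readings warrant urgent evaluation regardless of timing
    if systolic >= 160 or diastolic >= 110:
        return "severe-range hypertension: urgent evaluation"
    # Elevated readings: interpretation depends on gestational age and history
    if systolic >= 140 or diastolic >= 90:
        if chronic_htn or gestational_weeks < 20:
            # Elevation before 20 weeks (or with known history) suggests
            # chronic hypertension rather than a gestational disorder
            return "chronic hypertension range"
        return "possible gestational hypertension: confirm with repeat reading"
    return "normotensive"
```

The gestational-age branch reflects the standard convention that hypertension first detected after 20 weeks of pregnancy points toward gestational hypertension or preeclampsia, whereas earlier elevation points toward chronic hypertension.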

The disorders can lead to complications for the pregnant person, such as stroke, retinal detachment, organ damage or failure, and seizures, as well as for the baby, including restricted growth, low birth weight, and stillbirth. Many complications can lead to early induction of labor, cesarean delivery, and preterm birth.

After commissioning a systematic evidence review, the USPSTF provided a grade B recommendation for clinicians to offer or provide screening for hypertensive disorders of pregnancy. The recommendation concludes with “moderate certainty” that screening with blood pressure measurements has “substantial net benefit.”

The task force notes that it is “essential” for all pregnant women and pregnant people of all genders to be screened and that those who screen positive receive evidence-based management of their condition.

Risk factors include a history of eclampsia or preeclampsia, a family history of preeclampsia, a previous adverse pregnancy outcome, having gestational diabetes or chronic hypertension, being pregnant with more than one baby, having a first pregnancy, having a high body mass index prior to pregnancy, and being 35 years of age or older.

In addition, Black, American Indian, and Alaska Native people face higher risks and are more likely both to have and to die from a hypertensive disorder of pregnancy. In particular, Black people experience higher rates of maternal and infant morbidity and perinatal mortality than other racial and ethnic groups, and hypertensive disorders of pregnancy account for a larger proportion of these outcomes.

Although measuring blood pressure throughout pregnancy is an important first step, it’s not enough to improve inequities in health outcomes, the task force notes. Identifying hypertensive disorders of pregnancy requires adequate prenatal follow-up visits, surveillance, and evidence-based care, which can be a barrier for some pregnant people.

Follow-up visits with health care providers such as nurses, nurse midwives, pediatricians, and lactation consultants could help, as well as screening and monitoring during the postpartum period. Other approaches include telehealth, connections to community resources during the perinatal period, collaborative care provided in medical homes, and multilevel interventions to address underlying health inequities that increase health risks during pregnancy.

“Since screening is not enough to address the health disparities experienced by Black, American Indian, and Alaska Native people, health care professionals should also do what they can to help address these inequities,” Dr. Davis said. “For example, the task force identified a few promising approaches, including using standardized clinical bundles of best practices for disease management to help ensure that all pregnant persons receive appropriate, equitable care.”

Additional considerations

The USPSTF looked at the evidence on additional methods of screening but continued to find that measuring blood pressure at each prenatal visit is the best approach. Other evaluations, such as testing for proteinuria when preeclampsia is suspected, have low accuracy for detecting proteinuria in pregnancy.

Although there is no currently available treatment for preeclampsia except delivery, management strategies for diagnosed hypertensive disorders of pregnancy include close fetal and maternal monitoring, antihypertension medications, and magnesium sulfate for seizure prophylaxis when indicated.

Previously, the USPSTF also recommended that pregnant Black people be considered for treatment with low-dose aspirin to prevent preeclampsia, with aspirin use recommended for those with at least one additional moderate risk factor. Clinicians should also be aware of the risk of poor health outcomes among populations who face higher risks.

The USPSTF noted several gaps for future research, including the best approaches for blood pressure monitoring during pregnancy and the postpartum period, how to address health inequities through multilevel interventions, how to increase access to care through telehealth services, and how to mitigate cardiovascular complications later in life in patients diagnosed with hypertensive disorders of pregnancy.

“Continued research is needed in these promising areas,” Dr. Davis said. “We hope all clinicians will join us in helping ensure that all parents and babies have access to the care they need to be as healthy as possible.”

The draft recommendation statement and draft evidence review were posted for public comment on the USPSTF website. Comments can be submitted until March 6.

No relevant financial relationships have been disclosed.

A version of this article originally appeared on Medscape.com.



Physician opinions vary on surveillance colonoscopies in older adults with prior adenomas, survey finds

Article Type
Changed
Thu, 02/09/2023 - 17:44

Physician recommendations for surveillance colonoscopies in older adults with prior adenomas vary based on several factors, including patient age, health, adenoma risk, and physician specialty, according to a national survey.

In general, physicians were more likely to recommend surveillance for patients at a younger age, with better health, and with prior high-risk adenomas. Additionally, a large proportion of physicians reported uncertainty about whether the benefits of continued surveillance outweighed the risk of harm in older adults.

“There are no existing surveillance colonoscopy guidelines that integrate patient age, health, and adenoma risk, and physicians report significant decisional uncertainty,” Nancy Schoenborn, MD, MHS, associate professor of medicine at Johns Hopkins University, Baltimore, and colleagues wrote.

“Developing the evidence base to evaluate the risks and benefits of surveillance colonoscopy in older adults and decisional support tools that help physicians and patients incorporate available data and weigh risks and benefits are needed to address current gaps in care for older adults with prior adenomas,” the authors wrote.

The study was published online in the American Journal of Gastroenterology.
 

Surveying physicians

National guidelines recommend surveillance colonoscopy after adenoma removal at more frequent intervals than screening colonoscopy because of a higher risk of colorectal cancer among patients with adenomas. The high quality of screening colonoscopies coupled with an aging population means that many older adults have a history of adenomas and continue to undergo surveillance colonoscopies, the authors wrote.

The benefit-harm balance becomes uncertain as potential harms from the procedure increase with age. However, there is no clear guidance on when to stop surveillance in older adults following adenoma detection, they wrote.

Dr. Schoenborn and colleagues conducted a national cross-sectional survey of 1,800 primary care physicians and 600 gastroenterologists between April and November 2021. The primary care group included internal medicine, family medicine, general practice, and geriatric medicine physicians.

The research team asked whether physicians would recommend surveillance colonoscopy in a series of 12 vignettes that varied by patient age (75 or 85), patient health (good, moderate, or poor), and prior adenoma risk (low or high).

Good health was described as well-controlled hypertension and living independently; moderate health as moderate heart failure and difficulty walking; and poor health as severe chronic obstructive pulmonary disease on oxygen and requiring help with self-care.

For prior adenomas, high risk involved five tubular adenomas, one of which was 15 mm, and low risk involved two tubular adenomas, both of which were less than 10 mm. The survey also noted that the recommended surveillance intervals were 3 years in the high-risk scenario and 7 years in the low-risk scenario.
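
The vignette design described above is a full factorial cross of the three patient attributes, which a short sketch makes explicit (the dictionary keys and labels are illustrative, not from the survey instrument):

```python
from itertools import product

# The survey crossed three patient attributes to form the 12 vignettes
ages = [75, 85]
health_states = ["good", "moderate", "poor"]
adenoma_risk = ["low", "high"]  # low: 2 tubular adenomas <10 mm; high: 5, one 15 mm

# Guideline-recommended surveillance interval stated in each vignette (years)
interval_years = {"low": 7, "high": 3}

vignettes = [
    {"age": a, "health": h, "adenoma_risk": r, "interval_years": interval_years[r]}
    for a, h, r in product(ages, health_states, adenoma_risk)
]
assert len(vignettes) == 12  # 2 ages x 3 health states x 2 risk levels
```

Crossing every level of each factor is what lets the investigators separate the independent effects of age, health, and adenoma risk on physicians' recommendations.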

Researchers mailed 2,400 surveys and received 1,040 responses; 874 were included in the analysis because those respondents provided care to patients aged 65 and older and spent time seeing patients in clinic. Decisions about surveillance colonoscopies for adenomas in the absence of symptoms almost always occur in the outpatient setting, rather than in acute or urgent care, the authors wrote.
 

Large variations found

Overall, physicians were less likely to recommend surveillance colonoscopies if the patient was older, had poor health, and had lower-risk adenomas. Patient age and health had larger effects on decision-making than adenoma risk, with health status having the largest effect.

About 20.6% of physicians recommended surveillance if the patient was 85, compared with 49.8% if the patient was 75. In addition, 7.1% of physicians recommended surveillance if the patient was in poor health, compared with 28.8% for those in moderate health, and 67.7% for patients in good health.

If the prior adenoma was low risk, 29.7% of physicians recommended surveillance, compared with 41.6% if the prior adenoma was high risk.

In general, family medicine and general practice physicians were most likely to recommend surveillance, at 40%, and gastroenterologists were least likely to recommend surveillance, at 30.9%. Patient age and health had larger effects among gastroenterologists than among primary care physicians, and adenoma risk had similar effects between the two groups.

“The importance of patient age and health status found in our study mirrors study results on physician decision-making regarding screening colonoscopies in older adults and makes intuitive sense,” the authors wrote. “Whether the priorities reflected in our findings are supported by evidence is not clear, and our results highlight important knowledge gaps in the field that warrant future research.”
 

Physician uncertainty

Additional guidance would be helpful, the authors wrote. In the survey, about 52.3% of primary care physicians and 35.4% of gastroenterologists reported uncertainty about the benefit–harm balance of surveillance in older adults.

“Current guidelines on surveillance colonoscopies are solely based on prior adenoma characteristics,” the authors wrote. “Guidelines need to incorporate guidance that considers patient age and health status, as well as adenoma risk, and explicitly considers when surveillance should stop in older adults.”

In addition, most physicians in the survey – 85.9% of primary care physicians and 77% of gastroenterologists – said they would find a decision support tool helpful. At the same time, 32.8% of primary care physicians and 71.5% of gastroenterologists perceived it as the gastroenterologist’s role to decide about surveillance colonoscopies.

“Developing patient-facing materials, communication tools for clinicians, and tools to support shared decision-making about surveillance colonoscopies that engage both physicians and patients are all important next steps,” the authors wrote. “To our knowledge, there is no existing patient decision aid about surveillance colonoscopies; developing such a tool may be valuable.”

The study was supported by Dr. Schoenborn’s career development award from the National Institute on Aging. The authors reported no conflicts of interest.

A version of this article first appeared on Medscape.com.

 

Physician uncertainty

Additional guidance would be helpful, the authors wrote. In the survey, about 52.3% of primary care physicians and 35.4% of gastroenterologists reported uncertainty about the benefit–harm balance of surveillance in older adults.

“Current guidelines on surveillance colonoscopies are solely based on prior adenoma characteristics,” the authors wrote. “Guidelines need to incorporate guidance that considers patient age and health status, as well as adenoma risk, and explicitly considers when surveillance should stop in older adults.”

In addition, most physicians in the survey – 85.9% of primary care physicians and 77% of gastroenterologists – said they would find a decision support tool helpful. At the same time, 32.8% of primary care physicians and 71.5% of gastroenterologists perceived it as the gastroenterologist’s role to decide about surveillance colonoscopies.

“Developing patient-facing materials, communication tools for clinicians, and tools to support shared decision-making about surveillance colonoscopies that engage both physicians and patients are all important next steps,” the authors wrote. “To our knowledge, there is no existing patient decision aid about surveillance colonoscopies; developing such a tool may be valuable.”

The study was supported by Dr. Schoenborn’s career development award from the National Institute on Aging. The authors reported no conflicts of interest.

A version of this article first appeared on Medscape.com.

Physician recommendations for surveillance colonoscopies in older adults with prior adenomas vary based on several factors, including patient age, health, adenoma risk, and physician specialty, according to a national survey.

In general, physicians were more likely to recommend surveillance for patients at a younger age, with better health, and with prior high-risk adenomas. Additionally, a large proportion of physicians reported uncertainty about whether the benefits of continued surveillance outweighed the risk of harm in older adults.

“There are no existing surveillance colonoscopy guidelines that integrate patient age, health, and adenoma risk, and physicians report significant decisional uncertainty,” Nancy Schoenborn, MD, MHS, associate professor of medicine at Johns Hopkins University, Baltimore, and colleagues wrote.

“Developing the evidence base to evaluate the risks and benefits of surveillance colonoscopy in older adults and decisional support tools that help physicians and patients incorporate available data and weigh risks and benefits are needed to address current gaps in care for older adults with prior adenomas,” the authors wrote.

The study was published online in the American Journal of Gastroenterology.
 

Surveying physicians

National guidelines recommend surveillance colonoscopy after adenoma removal at more frequent intervals than screening colonoscopy because of a higher risk of colorectal cancer among patients with adenomas. The high quality of screening colonoscopies coupled with an aging population means that many older adults have a history of adenomas and continue to undergo surveillance colonoscopies, the authors wrote.

The benefit-harm balance becomes uncertain as potential harms from the procedure increase with age. However, there is no clear guidance on when to stop surveillance in older adults following adenoma detection, they wrote.

Dr. Schoenborn and colleagues conducted a national cross-sectional survey of 1,800 primary care physicians and 600 gastroenterologists between April and November 2021. The primary care group included internal medicine, family medicine, general practice, and geriatric medicine physicians.

The research team asked whether physicians would recommend surveillance colonoscopy in a series of 12 vignettes that varied by patient age (75 or 85), patient health (good, moderate, or poor), and prior adenoma risk (low or high).

Good health was described as well-controlled hypertension and living independently; moderate health as moderate heart failure with difficulty walking; and poor health as severe chronic obstructive pulmonary disease on oxygen, requiring help with self-care.

For prior adenomas, high risk involved five tubular adenomas, one of which was 15 mm, and low risk involved two tubular adenomas, both of which were less than 10 mm. The survey also noted that the recommended surveillance intervals were 3 years in the high-risk scenario and 7 years in the low-risk scenario.
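The 12 vignettes correspond to the full factorial crossing of the three patient attributes (2 ages × 3 health states × 2 adenoma-risk levels). A minimal sketch of how such a design can be enumerated; the variable names are illustrative, not taken from the study:

```python
from itertools import product

# Full factorial vignette design: every combination of the three attributes.
ages = [75, 85]
health_states = ["good", "moderate", "poor"]
adenoma_risks = ["low", "high"]  # low: 2 tubular adenomas <10 mm; high: 5, one 15 mm

vignettes = [
    {"age": a, "health": h, "adenoma_risk": r}
    for a, h, r in product(ages, health_states, adenoma_risks)
]

print(len(vignettes))  # 2 * 3 * 2 = 12 scenarios
```

A full factorial design lets the effect of each attribute be assessed independently of the other two.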

Researchers mailed 2,400 surveys and received 1,040 responses. They included 874 responses in the analysis, restricting to physicians who provided care to patients aged 65 and older and spent time seeing patients in clinic. Decisions about surveillance colonoscopies for adenomas in the absence of symptoms almost always occur in the outpatient setting, rather than in acute or urgent care, the authors wrote.
 

Large variations found

Overall, physicians were less likely to recommend surveillance colonoscopy if the patient was older, was in poorer health, or had lower-risk adenomas. Patient age and health had larger effects on decision-making than adenoma risk, with health status having the largest effect.

About 20.6% of physicians recommended surveillance if the patient was 85, compared with 49.8% if the patient was 75. In addition, 7.1% of physicians recommended surveillance if the patient was in poor health, compared with 28.8% for those in moderate health, and 67.7% for patients in good health.

If the prior adenoma was low risk, 29.7% of physicians recommended surveillance, compared with 41.6% if the prior adenoma was high risk.

In general, family medicine and general practice physicians were most likely to recommend surveillance, at 40%, and gastroenterologists were least likely to recommend surveillance, at 30.9%. Patient age and health had larger effects among gastroenterologists than among primary care physicians, and adenoma risk had similar effects between the two groups.

“The importance of patient age and health status found in our study mirrors study results on physician decision-making regarding screening colonoscopies in older adults and makes intuitive sense,” the authors wrote. “Whether the priorities reflected in our findings are supported by evidence is not clear, and our results highlight important knowledge gaps in the field that warrant future research.”
 

Physician uncertainty

Additional guidance would be helpful, the authors wrote. In the survey, about 52.3% of primary care physicians and 35.4% of gastroenterologists reported uncertainty about the benefit–harm balance of surveillance in older adults.

“Current guidelines on surveillance colonoscopies are solely based on prior adenoma characteristics,” the authors wrote. “Guidelines need to incorporate guidance that considers patient age and health status, as well as adenoma risk, and explicitly considers when surveillance should stop in older adults.”

In addition, most physicians in the survey – 85.9% of primary care physicians and 77% of gastroenterologists – said they would find a decision support tool helpful. At the same time, 32.8% of primary care physicians and 71.5% of gastroenterologists perceived it as the gastroenterologist’s role to decide about surveillance colonoscopies.

“Developing patient-facing materials, communication tools for clinicians, and tools to support shared decision-making about surveillance colonoscopies that engage both physicians and patients are all important next steps,” the authors wrote. “To our knowledge, there is no existing patient decision aid about surveillance colonoscopies; developing such a tool may be valuable.”

The study was supported by Dr. Schoenborn’s career development award from the National Institute on Aging. The authors reported no conflicts of interest.

A version of this article first appeared on Medscape.com.

FROM THE AMERICAN JOURNAL OF GASTROENTEROLOGY
CV deaths jumped in 2020, reflecting pandemic toll

Article Type
Changed
Tue, 02/07/2023 - 10:01

Cardiovascular-related deaths increased dramatically in 2020, marking the largest single-year increase since 2015 and surpassing the previous record from 2003, according to the American Heart Association’s 2023 Statistical Update.

During the first year of the COVID-19 pandemic, the largest increases in cardiovascular disease (CVD) deaths were seen among Asian, Black, and Hispanic people.

“We thought we had been improving as a country with respect to CVD deaths over the past few decades,” Connie Tsao, MD, chair of the AHA Statistical Update writing committee, told this news organization.

Since 2020, however, those trends have changed. Dr. Tsao, a staff cardiologist at Beth Israel Deaconess Medical Center and assistant professor of medicine at Harvard Medical School, both in Boston, noted the firsthand experience that many clinicians had in seeing the shift.

“We observed this sharp rise in age-adjusted CVD deaths, which corresponds to the COVID-19 pandemic,” she said. “Those of us health care providers knew from the overfull hospitals and ICUs that clearly COVID took a toll, particularly in those with cardiovascular risk factors.”

The AHA Statistical Update was published online in the journal Circulation.
 

Data on deaths

Each year, the American Heart Association and National Institutes of Health report the latest statistics related to heart disease, stroke, and cardiovascular risk factors. The 2023 update includes additional information about pandemic-related data.

Overall, the number of people who died from cardiovascular disease increased during the first year of the pandemic, rising from 876,613 in 2019 to 928,741 in 2020. This topped the previous high of 910,000 in 2003.

In addition, the age-adjusted mortality rate increased for the first time in several years, Dr. Tsao said, by a “fairly substantial” 4.6%. The age-adjusted mortality rate incorporates the variability in the aging population from year to year, accounting for higher death rates among older people.
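Direct standardization is the usual way an age-adjusted rate is computed: each age stratum's crude death rate is weighted by a fixed standard population, so year-to-year shifts in the age distribution do not masquerade as changes in underlying risk. A toy illustration with invented numbers (not AHA data):

```python
# Direct age standardization with two invented strata.
# Each tuple: (deaths, population, fixed standard-population weight).
strata = [
    (100, 100_000, 0.60),  # e.g., ages 45-64
    (900, 100_000, 0.40),  # e.g., ages 65+
]

# Weighted sum of stratum-specific crude rates.
adjusted = sum(d / n * w for d, n, w in strata)
print(round(adjusted * 100_000, 1))  # age-adjusted deaths per 100,000 -> 420.0
```

Because the weights stay fixed across years, comparing adjusted rates isolates changes in stratum-specific risk from changes in population age structure.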

“Even though our total number of deaths has been slowly increasing over the past decade, we have seen a decline each year in our age-adjusted rates – until 2020,” she said. “I think that is very indicative of what has been going on within our country – and the world – in light of people of all ages being impacted by the COVID-19 pandemic, especially before vaccines were available to slow the spread.”

The largest increases in CVD-related deaths occurred among Asian, Black, and Hispanic people, who were most heavily affected during the first year of the pandemic.

“People from communities of color were among those most highly impacted, especially early on, often due to a disproportionate burden of cardiovascular risk factors, such as hypertension and obesity,” Michelle Albert, MD, MPH, president of AHA and a professor of medicine at the University of California, San Francisco, said in a statement.

Dr. Albert, who is also the director of UCSF’s Center for the Study of Adversity and Cardiovascular Disease, does research on health equity and noted the disparities seen in the 2020 numbers. “Additionally, there are socioeconomic considerations, as well as the ongoing impact of structural racism on multiple factors, including limiting the ability to access quality health care,” she said.
 

Additional considerations

In a special commentary, the Statistical Update writing committee pointed to the need to track data for other underrepresented communities, including LGBTQ people and those living in rural or urban areas. The authors outlined several ways to better understand the effects of identity and social determinants of health, as well as strategies to reduce cardiovascular-related disparities.

“This year’s writing group made a concerted effort to gather information on specific social factors related to health risk and outcomes, including sexual orientation, gender identity, urbanization, and socioeconomic position,” Dr. Tsao said. “However, the data are lacking because these communities are grossly underrepresented in clinical and epidemiological research.”

For the next several years, the AHA Statistical Update will likely include more insights about the effects of the COVID-19 pandemic, as well as ongoing disparities.

“For sure, we will be continuing to see the effects of the pandemic for years to come,” Dr. Tsao said. “Recognition of the disparities in outcomes among vulnerable groups should be a call to action among health care providers and researchers, administration, and policy leaders to investigate the reasons and make changes to reverse these trends.”

The statistical update was prepared by a volunteer writing group on behalf of the American Heart Association Council on Epidemiology and Prevention Statistics Committee and Stroke Statistics Subcommittee.

A version of this article first appeared on Medscape.com.


FROM CIRCULATION

Two AI optical diagnosis systems appear clinically comparable for small colorectal polyps

Striking a balance
Article Type
Changed
Wed, 02/15/2023 - 09:49

In a head-to-head comparison, two commercially available computer-aided diagnosis systems appeared clinically equivalent for the optical diagnosis of small colorectal polyps, according to a research letter published in Gastroenterology.

For the optical diagnosis of diminutive colorectal polyps, the comparable performances of both CAD EYE (Fujifilm Co.) and GI Genius (Medtronic) met cutoff guidelines to implement the cost-saving leave-in-situ and resect-and-discard strategies, wrote Cesare Hassan, MD, PhD, associate professor of gastroenterology at Humanitas University and member of the endoscopy unit at Humanitas Clinical Research Hospital in Milan, and colleagues.

Dr. Cesare Hassan

“Screening colonoscopy is effective in reducing colorectal cancer risk but also represents a substantial financial burden,” the authors wrote. “Novel strategies based on artificial intelligence may enable targeted removal only of polyps deemed to be neoplastic, thus reducing patient burden for unnecessary removal of nonneoplastic polyps and reducing costs for histopathology.”

Several computer-aided diagnosis (CADx) systems are commercially available for optical diagnosis of colorectal polyps, the authors wrote. However, each artificial intelligence (AI) system has been trained and validated with different polyp datasets, which may contribute to variability and affect the clinical outcome of optical diagnosis-based strategies.

Dr. Hassan and colleagues conducted a prospective comparison trial at a single center to look at the real-life performances of two CADx systems on optical diagnosis of polyps smaller than 5 mm.

At colonoscopy, the same polyp was visualized by the same endoscopist on two different monitors simultaneously with the respective output from each of the two CADx systems. Pre- and post-CADx human diagnoses were also collected.

Between January 2022 and March 2022, 176 consecutive patients age 40 and older underwent colonoscopy for colorectal cancer screening, polypectomy surveillance, or gastrointestinal symptoms. About 60.8% of participants were men, and the average age was 60.

Among 543 polyps detected and removed, 169 (31.3%) were adenomas, and 373 (68.7%) were nonadenomas. Of those, 325 (59.9%) were rectosigmoid polyps of 5 mm or less in diameter and eligible for analyses in the study. This included 44 adenomas (13.5%) and 281 nonadenomas (86.5%).

The two CADx systems were grouped as CADx-A for CAD EYE and CADx-B for GI Genius. CADx-A provided prediction output for all 325 rectosigmoid polyps of 5 mm or less, whereas CADx-B wasn’t able to provide output for six of the nonadenomas, which were excluded from the analysis.

The negative predictive value (NPV) for rectosigmoid polyps of 5 mm or less was 97% for CADx-A and 97.7% for CADx-B, the authors wrote. The American Society for Gastrointestinal Endoscopy recommends a threshold for optical diagnosis of at least 90%.

In addition, the sensitivity for adenomas was 81.8% for CADx-A and 86.4% for CADx-B. The accuracy of CADx-A was slightly higher, at 93.2%, as compared with 91.5% for CADx-B.

Based on AI prediction alone, 269 of 319 polyps (84.3%) with CADx-A and 260 of 319 polyps (81.5%) with CADx-B would have been classified as nonneoplastic and avoided removal. This corresponded to a specificity of 94.9% for CADx-A and 92.4% for CADx-B, which wasn’t significantly different, the authors wrote. Concordance in histology prediction between the two systems was 94.7%.
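These figures follow from standard confusion-matrix arithmetic. In the sketch below, the cell counts are reconstructed from the reported percentages (44 adenomas and 275 nonadenomas among the 319 analyzed polyps); they are not published directly in the letter:

```python
# Reconstructed CADx-A confusion matrix for the 319 analyzed polyps.
tp, fn = 36, 8    # adenomas predicted neoplastic / nonneoplastic (36 + 8 = 44)
tn, fp = 261, 14  # nonadenomas predicted nonneoplastic / neoplastic (261 + 14 = 275)

sensitivity = tp / (tp + fn)  # 36/44   ~ 81.8%
specificity = tn / (tn + fp)  # 261/275 ~ 94.9%
npv = tn / (tn + fn)          # 261/269 ~ 97.0%, above the >=90% threshold

print(f"sens={sensitivity:.1%} spec={specificity:.1%} npv={npv:.1%}")
```

Note that the NPV denominator (tn + fn = 269) matches the 269 polyps CADx-A would have classified as nonneoplastic and left in place.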

Based on the 2020 U.S. Multi-Society Task Force on Colorectal Cancer (USMSTF) guidelines, the agreement with histopathology in surveillance interval assignment was 84.7% for CADx-A and 89.2% for CADx-B. Based on the 2020 European Society of Gastrointestinal Endoscopy (ESGE) guidelines, the agreement was 98.3% for both systems.

For rectosigmoid polyps of 5 mm or less, the NPV of unassisted optical diagnosis was 97.8% for a high-confidence diagnosis, but it wasn’t significantly different from the NPV of CADx-A (96.9%) or CADx-B (97.6%). The NPV of a CADx-assisted optical diagnosis at high confidence was 97.7%, without statistically significant differences as compared with unassisted interpretation.

Based on the 2020 USMSTF and ESGE guidelines, the agreement between unassisted interpretation and histopathology in surveillance interval assignment was 92.6% and 98.9%, respectively. There was total agreement between unassisted interpretation and CADx-assisted interpretation in surveillance interval assignment based on both guidelines.

As in previous findings, unassisted endoscopic diagnosis was on par with CADx-assisted diagnosis in both technical accuracy and clinical outcomes. The study authors attributed the lack of additional benefit from CADx to the high performance of unassisted endoscopist diagnosis, with a 97.8% NPV for rectosigmoid polyps and 90% or greater concordance with histology in postpolypectomy surveillance intervals. In addition, only the human endoscopist achieved 90% or greater agreement in postpolypectomy surveillance intervals under the U.S. guidelines, mainly owing to a very high specificity.

“This confirms the complexity of the human-machine interaction that should not be marginalized in the stand-alone performance of the machine,” the authors wrote.

However, the high accuracy of unassisted endoscopists in the academic center in Italy is unlikely to mirror the real performance in community settings, they added. Future studies should focus on nontertiary centers to show the additional benefit, if any, that CADx provides for leave-in-situ colorectal polyps.

“A high degree of concordance in clinical outcomes was shown when directly comparing in vivo two different systems of CADx,” the authors concluded. “This reassured our confidence in the standardization of performance that may be achieved with the incorporation of AI in clinical practice, irrespective of the availability of multiple systems.”

The study authors declared no funding source for this study. Several authors reported consulting relationships with numerous companies, including Fuji and Medtronic, which make the CAD EYE and GI Genius systems, respectively.

Striking a balance

Colonoscopy is the gold standard test to reduce an individual’s chance of developing colorectal cancer. The latest tool to improve colonoscopy outcomes is the integration of artificial intelligence (AI) during the exam. AI systems offer both computer-aided detection (CADe) and computer-aided diagnosis (CADx). Accurate CADx could enable a cost-effective strategy of removing only neoplastic polyps.
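The removal-sparing logic behind such a strategy can be sketched as a simple decision rule. This is a hypothetical illustration of the leave-in-situ criteria discussed in this article; the function name and inputs are invented for clarity and are not drawn from any real CADx system’s API:

```python
# Hypothetical sketch of a leave-in-situ decision for diminutive rectosigmoid
# polyps. Names and inputs are illustrative, not from any real CADx interface.

def leave_in_situ(size_mm: float, location: str, cadx_prediction: str,
                  high_confidence: bool) -> bool:
    """Leave a polyp in place only when all criteria for the strategy are met."""
    return (
        size_mm <= 5                        # diminutive polyp
        and location == "rectosigmoid"      # strategy applies to rectosigmoid polyps
        and cadx_prediction == "nonneoplastic"
        and high_confidence                 # low-confidence calls default to resection
    )

print(leave_in_situ(4, "rectosigmoid", "nonneoplastic", True))   # True: candidate to leave in situ
print(leave_in_situ(4, "ascending", "nonneoplastic", True))      # False: outside the strategy, resect
```

Any polyp failing a single criterion falls back to the default of resection, which is why the specificity and negative predictive value of the CADx call matter so much.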

Dr. Seth A. Gross

The study by Hassan et al. compared two AI CADx systems for optical diagnosis of colorectal polyps ≤ 5 mm. Each polyp was evaluated simultaneously by both AI systems, but the endoscopist first made an unassisted optical diagnosis. The two systems (CAD EYE [Fujifilm Co.] and GI Genius [Medtronic]) had similar specificity: 94.9% and 92.4%, respectively. Furthermore, the systems demonstrated negative predictive values of 96.9% and 97.6%, respectively, exceeding the American Society for Gastrointestinal Endoscopy’s threshold of at least 90%.

A surprising finding was that the unassisted endoscopist, before CADx interpretation, achieved a negative predictive value of 97.8%, leaving negligible benefit when CADx was activated. This level of polyp interpretation is likely lower in community practice, however, and clinical trials will be needed to confirm it.

There is rapid development of CADx and CADe systems entering the clinical realm of colonoscopy. It is critical to be able to objectively review the performance of these AI systems in real-life clinical settings to assess accuracy for both CADx and CADe. Clinicians must balance striving for high-quality colonoscopy outcomes with the cost of innovative technology like AI. It is reassuring, however, that the initial CADx systems have similarly high accuracy for polyp interpretation, since most practices will incorporate a single system. Future studies will be needed to compare not only the accuracy of AI platforms offering CADx and CADe but also the many other features that will be entering the endoscopy space.
 

Seth A. Gross, MD, is professor of medicine at NYU Grossman School of Medicine and clinical chief of gastroenterology and hepatology at NYU Langone Health. He disclosed financial relationships with Medtronic, Olympus, Iterative Scopes, and Micro-Tech Endoscopy.


In a head-to-head comparison, two commercially available computer-aided diagnosis systems appeared clinically equivalent for the optical diagnosis of small colorectal polyps, according to a research letter published in Gastroenterology.

For the optical diagnosis of diminutive colorectal polyps, the comparable performances of both CAD EYE (Fujifilm Co.) and GI Genius (Medtronic) met cutoff guidelines to implement the cost-saving leave-in-situ and resect-and-discard strategies, wrote Cesare Hassan, MD, PhD, associate professor of gastroenterology at Humanitas University and member of the endoscopy unit at Humanitas Clinical Research Hospital in Milan, and colleagues.

Dr. Cesare Hassan

“Screening colonoscopy is effective in reducing colorectal cancer risk but also represents a substantial financial burden,” the authors wrote. “Novel strategies based on artificial intelligence may enable targeted removal only of polyps deemed to be neoplastic, thus reducing patient burden for unnecessary removal of nonneoplastic polyps and reducing costs for histopathology.”

Several computer-aided diagnosis (CADx) systems are commercially available for optical diagnosis of colorectal polyps, the authors wrote. However, each artificial intelligence (AI) system has been trained and validated with different polyp datasets, which may contribute to variability and affect the clinical outcome of optical diagnosis-based strategies.

Dr. Hassan and colleagues conducted a prospective comparison trial at a single center to look at the real-life performance of two CADx systems for optical diagnosis of polyps 5 mm or smaller.

At colonoscopy, the same polyp was visualized by the same endoscopist on two different monitors simultaneously with the respective output from each of the two CADx systems. Pre- and post-CADx human diagnoses were also collected.

Between January 2022 and March 2022, 176 consecutive patients aged 40 and older underwent colonoscopy for colorectal cancer screening, polypectomy surveillance, or gastrointestinal symptoms. Of these, 60.8% were men, and the average age was 60.

Among 543 polyps detected and removed, 169 (31.3%) were adenomas, and 373 (68.7%) were nonadenomas. Of those, 325 (59.9%) were rectosigmoid polyps of 5 mm or less in diameter and eligible for analyses in the study. This included 44 adenomas (13.5%) and 281 nonadenomas (86.5%).

The two CADx systems were grouped as CADx-A for CAD EYE and CADx-B for GI Genius. CADx-A provided prediction output for all 325 rectosigmoid polyps of 5 mm or less, whereas CADx-B wasn’t able to provide output for six of the nonadenomas, which were excluded from the analysis.

The negative predictive value (NPV) for rectosigmoid polyps of 5 mm or less was 97% for CADx-A and 97.7% for CADx-B, the authors wrote. The American Society for Gastrointestinal Endoscopy recommends a threshold for optical diagnosis of at least 90%.

In addition, the sensitivity for adenomas was 81.8% for CADx-A and 86.4% for CADx-B. The accuracy of CADx-A was slightly higher, at 93.2%, as compared with 91.5% for CADx-B.

Based on AI prediction alone, 269 of 319 polyps (84.3%) with CADx-A and 260 of 319 polyps (81.5%) with CADx-B would have been classified as nonneoplastic and avoided removal. This corresponded to a specificity of 94.9% for CADx-A and 92.4% for CADx-B, which wasn’t significantly different, the authors wrote. Concordance in histology prediction between the two systems was 94.7%.
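The interplay of these metrics can be sketched with a short calculation. The counts below are hypothetical, chosen only to illustrate how sensitivity, specificity, and NPV are derived from optical-diagnosis results; they are not the study’s data:

```python
# Illustrative only: deriving sensitivity, specificity, and NPV from
# hypothetical optical-diagnosis counts (not the study's data).

def diagnostic_metrics(tp, fp, tn, fn):
    """Return sensitivity, specificity, and NPV as fractions."""
    sensitivity = tp / (tp + fn)   # adenomas correctly called neoplastic
    specificity = tn / (tn + fp)   # nonadenomas correctly called nonneoplastic
    npv = tn / (tn + fn)           # "nonneoplastic" calls that were truly nonneoplastic
    return sensitivity, specificity, npv

# Hypothetical example: 40 adenomas, 260 nonadenomas
sens, spec, npv = diagnostic_metrics(tp=34, fp=12, tn=248, fn=6)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} NPV={npv:.1%}")
# prints: sensitivity=85.0% specificity=95.4% NPV=97.6%
```

Note that because nonadenomas dominate diminutive rectosigmoid polyps, a high NPV is attainable even with moderate sensitivity, which is why guidelines key the leave-in-situ strategy to an NPV threshold of at least 90%.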

Based on the 2020 U.S. Multi-Society Task Force on Colorectal Cancer (USMSTF) guidelines, the agreement with histopathology in surveillance interval assignment was 84.7% for CADx-A and 89.2% for CADx-B. Based on the 2020 European Society of Gastrointestinal Endoscopy (ESGE) guidelines, the agreement was 98.3% for both systems.

For rectosigmoid polyps of 5 mm or less, the NPV of unassisted optical diagnosis was 97.8% for a high-confidence diagnosis, but it wasn’t significantly different from the NPV of CADx-A (96.9%) or CADx-B (97.6%). The NPV of a CADx-assisted optical diagnosis at high confidence was 97.7%, without statistically significant differences as compared with unassisted interpretation.

Based on the 2020 USMSTF and ESGE guidelines, the agreement between unassisted interpretation and histopathology in surveillance interval assignment was 92.6% and 98.9%, respectively. There was total agreement between unassisted interpretation and CADx-assisted interpretation in surveillance interval assignment based on both guidelines.

Consistent with previous findings, unassisted endoscopic diagnosis was on par with CADx-assisted diagnosis in both technical accuracy and clinical outcomes. The study authors attributed the lack of additional benefit from CADx to the high performance of unassisted endoscopist diagnosis, citing the 97.8% NPV for rectosigmoid polyps and concordance of 90% or greater with histology in postpolypectomy surveillance intervals. Notably, only the unassisted endoscopist achieved 90% or greater agreement in postpolypectomy surveillance intervals under the U.S. guidelines, mainly because of a very high specificity.

“This confirms the complexity of the human-machine interaction that should not be marginalized in the stand-alone performance of the machine,” the authors wrote.

However, the high accuracy of unassisted endoscopists in the academic center in Italy is unlikely to mirror the real performance in community settings, they added. Future studies should focus on nontertiary centers to show the additional benefit, if any, that CADx provides for leave-in-situ colorectal polyps.

“A high degree of concordance in clinical outcomes was shown when directly comparing in vivo two different systems of CADx,” the authors concluded. “This reassured our confidence in the standardization of performance that may be achieved with the incorporation of AI in clinical practice, irrespective of the availability of multiple systems.”

The study authors declared no funding source for this study. Several authors reported consulting relationships with numerous companies, including Fuji and Medtronic, which make the CAD EYE and GI Genius systems, respectively.


FROM GASTROENTEROLOGY


Noninvasive liver test may help select asymptomatic candidates for heart failure tests

Earlier ID of NAFLD, HFpEF?
Article Type
Changed
Thu, 02/02/2023 - 12:47

A noninvasive test for liver disease may be a useful, low-cost screening tool to select asymptomatic candidates for a detailed examination of heart failure with preserved ejection fraction (HFpEF), say authors of a report published in Gastro Hep Advances.

The fibrosis-4 (FIB-4) index was a significant predictor of high HFpEF risk, wrote Chisato Okamoto, MD, of the department of medical biochemistry at Osaka University Graduate School of Medicine and the National Cerebral and Cardiovascular Center in Japan, and colleagues.

“Recognition of heart failure with preserved ejection fraction at an early stage in mass screening is desirable, but difficult to achieve,” the authors wrote. “The FIB-4 index is calculated using only four parameters that are routinely evaluated in general health check-up programs.”

HFpEF has emerged in recent years as a disease with a poor prognosis, they wrote. Early diagnosis can be challenging for several reasons, particularly because patients with HFpEF are often asymptomatic until late in the disease process and have normal left ventricular filling pressures at rest. By using a tool to select probable cases from subclinical participants in a health check-up program, clinicians can refer patients for a diastolic stress test, which is considered the gold standard for diagnosing HFpEF.

Previous studies have found that the FIB-4 index, a noninvasive tool to estimate liver stiffness and fibrosis, is associated with a higher risk of major adverse cardiovascular events (MACE) in patients with HFpEF. In addition, patients with nonalcoholic fatty liver disease (NAFLD) have a twofold higher prevalence of HFpEF than the general population.

Dr. Okamoto and colleagues examined the association between the FIB-4 index and HFpEF risk based on the Heart Failure Association’s diagnostic algorithm for HFpEF in patients with breathlessness (HFA-PEFF). The researchers looked at the prognostic impact of the FIB-4 index in 710 patients who participated in a health check-up program in the rural community of Arita-cho, Japan, between 2006 and 2007. They excluded participants with a history of cardiovascular disease or reduced left ventricular systolic function (LVEF < 50%). Researchers calculated the FIB-4 index and HFA-PEFF score for all participants.

First, using the HFA-PEFF scores, the researchers sorted participants into five groups by HFpEF risk: 215 (30%) with zero points, 100 (14%) with 1 point, 171 (24%) with 2 points, 163 (23%) with 3 points, and 61 (9%) with 4-6 points. Participants in the high-risk group (scores 4-6) were older, mostly men, and had higher blood pressure, alcohol intake, hypertension, dyslipidemia, and liver disease. The higher the HFpEF risk group, the higher the rates of all-cause mortality, hospitalization for heart failure, and MACE.

Overall, the FIB-4 index correlated with the HFpEF risk groups and showed a stepwise increase across them: 0.94 in the low-risk group, 1.45 in the intermediate-risk group, and 1.99 in the high-risk group, the authors wrote. The FIB-4 index also correlated with markers associated with components of the HFA-PEFF scoring system.

Using multivariate logistic regression analysis, the FIB-4 index was associated with a high HFpEF risk, and an increase in FIB-4 was associated with increased odds of high HFpEF risk. The association remained significant across four separate models that accounted for risk factors associated with lifestyle-related diseases, blood parameters associated with liver disease, and chronic conditions such as hypertension, dyslipidemia, diabetes mellitus, and liver disease.

In additional area under the curve (AUC) analyses, the FIB-4 index was a significant predictor of high HFpEF risk. At cutoff values typically used for advanced liver fibrosis in NAFLD, a FIB-4 cutoff of 1.3 or less had a sensitivity of 85.2%, while a FIB-4 cutoff of 2.67 or higher had a specificity of 94.8%. At alternate cutoff values typically used for patients with HIV/hepatitis C virus infection, a FIB-4 cutoff of less than 1.45 had a sensitivity of 75.4%, while a FIB-4 cutoff of greater than 3.25 had a specificity of 98%.
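For context, the FIB-4 index is computed from four routine parameters: age, AST, ALT, and platelet count. Combined with the NAFLD cutoffs above, a minimal sketch (the input values are illustrative, not patient data from the study):

```python
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_10e9_l: float) -> float:
    """Published FIB-4 formula: (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def nafld_risk_band(score: float, low: float = 1.3, high: float = 2.67) -> str:
    """Classify with the NAFLD advanced-fibrosis cutoffs cited in the study."""
    if score <= low:
        return "low"
    if score >= high:
        return "high"
    return "intermediate"

# Illustrative values, not patient data
score = fib4(age_years=65, ast_u_l=40, alt_u_l=30, platelets_10e9_l=200)
print(f"FIB-4 = {score:.2f}, risk band = {nafld_risk_band(score)}")
# prints: FIB-4 = 2.37, risk band = intermediate
```

The alternate HIV/HCV cutoffs the authors tested (1.45 and 3.25) would be substituted simply by calling `nafld_risk_band(score, low=1.45, high=3.25)`.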

Using cutoffs of 1.3 and 2.67, a higher FIB-4 was associated with higher rates of clinical events and MACE, as well as a higher HFpEF risk. Using the alternate cutoffs of 1.45 and 3.25, prognostic stratification of clinical events and MACE was also possible.

When all variables were included in the multivariate model, the FIB-4 index remained a significant prognostic predictor. The FIB-4 index, which stratified clinical prognosis, was also an independent predictor of all-cause mortality and hospitalization for heart failure.

Although additional studies are needed to clarify the interaction between liver and heart function, the study authors wrote, the findings provide valuable insights into the cardiohepatic interaction that may help reduce the development of HFpEF.

“Since it can be easily, quickly, and inexpensively measured, routine or repeated measurements of the FIB-4 index could help in selecting preferred candidates for detailed examination of HFpEF risk, which may improve clinical outcomes by diagnosing HFpEF at an early stage,” they wrote.

The study was supported by grants from the Osaka Medical Research Foundation for Intractable Disease, the Japan Arteriosclerosis Prevention Fund, the Japan Society for the Promotion of Science, and the Japan Heart Foundation. The authors disclosed no conflicts.


The 2021 NAFLD clinical care pathway is a shining example of how a simple score like the fibrosis-4 (FIB-4) index – paired sequentially with a second noninvasive test like vibration-controlled elastography – can provide an accurate, cost-effective screening tool and risk stratification and further limit invasive testing such as liver biopsy.

Dr. Anand S. Shah
This study by a cardiovascular group provided a related argument: investigating a tool used for liver fibrosis, the FIB-4 index, as a screen for difficult-to-diagnose heart failure with preserved ejection fraction (HFpEF). The current consensus diagnostic algorithm for HFpEF requires an echocardiogram and B-type natriuretic peptide measurement before invasive hemodynamic exercise stress testing. Okamoto et al. showed that a high FIB-4 index correlated with a high-risk HFA-PEFF score and with higher all-cause mortality, cardiovascular mortality, and hospital admission for heart failure. In addition, the FIB-4 index at the same cutoffs used for NASH had high sensitivity and specificity. Further research will be needed to validate the benefit of FIB-4 as a screening test for HFpEF as well as its role in a sequential testing algorithm; additional research also should explore the influence of hepatic damage and fibrosis on cardiac function and morphology.

Broader use of FIB-4 by cardiovascular and hepatology providers may increase earlier identification of NAFLD or HFpEF or both.
 

Anand S. Shah, MD, is director of hepatology at Atlanta VA Healthcare and assistant professor of medicine, division of digestive disease, department of medicine, Emory University, Atlanta. He has no financial conflicts.


A noninvasive test for liver disease may be a useful, low-cost screening tool to select asymptomatic candidates for a detailed examination of heart failure with preserved ejection fraction (HFpEF), say authors of a report published in Gastro Hep Advances.

The fibrosis-4 (FIB-4) index was a significant predictor of high HFpEF risk, wrote Chisato Okamoto, MD, of the department of medical biochemistry at Osaka University Graduate School of Medicine and the National Cerebral and Cardiovascular Center in Japan, and colleagues.

“Recognition of heart failure with preserved ejection fraction at an early stage in mass screening is desirable, but difficult to achieve,” the authors wrote. “The FIB-4 index is calculated using only four parameters that are routinely evaluated in general health check-up programs.”

HFpEF is an emerging disease in recent years with a poor prognosis, they wrote. Early diagnosis can be challenging for several reasons, particularly because HFpEF patients are often asymptomatic until late in the disease process and have normal left ventricular filling pressures at rest. By using a tool to select probable cases from subclinical participants in a health check-up program, clinicians can refer patients for a diastolic stress test, which is considered the gold standard for diagnosing HFpEF.

Previous studies have found that the FIB-4 index, a noninvasive tool to estimate liver stiffness and fibrosis, is associated with a higher risk of major adverse cardiovascular events (MACE) in patients with HFpEF. In addition, patients with nonalcoholic fatty liver disease (NAFLD) have a twofold higher prevalence of HFpEF than the general population.

Dr. Okamoto and colleagues examined the association between the FIB-4 index and HFpEF risk based on the Heart Failure Association’s diagnostic algorithm for HFpEF in patients with breathlessness (HFA-PEFF). The researchers looked at the prognostic impact of the FIB-4 index in 710 patients who participated in a health check-up program in the rural community of Arita-cho, Japan, between 2006 and 2007. They excluded participants with a history of cardiovascular disease or reduced left ventricular systolic function (LVEF < 50%). Researchers calculated the FIB-4 index and HFA-PEFF score for all participants.

First, using the HFA-PEFF scores, the researchers sorted participants into five groups by HFpEF risk: 215 (30%) with zero points, 100 (14%) with 1 point, 171 (24%) with 2 points, 163 (23%) with 3 points, and 61 (9%) with 4-6 points. Participants in the high-risk group (scores 4-6) were older, mostly men, and had higher blood pressure, alcohol intake, hypertension, dyslipidemia, and liver disease. The higher the HFpEF risk group, the higher the rates of all-cause mortality, hospitalization for heart failure, and MACE.

Overall, the FIB-4 index was correlated with the HFpEF risk groups and increased stepwise across them: 0.94 in the low-risk group, 1.45 in the intermediate-risk group, and 1.99 in the high-risk group, the authors wrote. The FIB-4 index also correlated with markers associated with components of the HFA-PEFF scoring system.

Using multivariate logistic regression analysis, the FIB-4 index was associated with a high HFpEF risk, and an increase in FIB-4 was associated with increased odds of high HFpEF risk. The association remained significant across four separate models that accounted for risk factors associated with lifestyle-related diseases, blood parameters associated with liver disease, and chronic conditions such as hypertension, dyslipidemia, diabetes mellitus, and liver disease.

In additional area under the curve (AUC) analyses, the FIB-4 index was a significant predictor of high HFpEF risk. At the cutoff values typically used for advanced liver fibrosis in NAFLD, the lower cutoff of 1.3 had a sensitivity of 85.2%, while the upper cutoff of 2.67 had a specificity of 94.8%. At the alternate cutoff values typically used for patients with HIV/hepatitis C virus infection, the lower cutoff of 1.45 had a sensitivity of 75.4%, while the upper cutoff of 3.25 had a specificity of 98%.

Using cutoffs of 1.3 and 2.67, a higher FIB-4 was associated with higher rates of clinical events and MACE, as well as a higher HFpEF risk. Using the alternate cutoffs of 1.45 and 3.25, prognostic stratification of clinical events and MACE was also possible.
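A simple stratification along these lines could be sketched as follows; the function name is hypothetical, but the cutoff pairs (1.3/2.67 for the NAFLD scheme, 1.45/3.25 for the alternate scheme) are those reported in the study:

```python
def fib4_risk_band(fib4: float, low_cutoff: float = 1.3,
                   high_cutoff: float = 2.67) -> str:
    """Band a FIB-4 value using the NAFLD cutoffs reported in the study;
    pass low_cutoff=1.45, high_cutoff=3.25 for the alternate scheme."""
    if fib4 < low_cutoff:
        return "low"
    if fib4 >= high_cutoff:
        return "high"
    return "indeterminate"

print(fib4_risk_band(0.94))             # -> low
print(fib4_risk_band(1.99))             # -> indeterminate
print(fib4_risk_band(3.0, 1.45, 3.25))  # -> indeterminate
```

In screening terms, the "low" band rules patients out, the "high" band flags them for detailed HFpEF workup, and the indeterminate band would need clinical judgment or further testing.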

When all variables were included in the multivariate model, the FIB-4 index remained a significant prognostic predictor. FIB-4–based stratification was also an independent predictor of all-cause mortality and hospitalization for heart failure.

Although additional studies are needed to reveal the interaction between liver and heart function, the study authors wrote, the findings provide valuable insights into the cardiohepatic interaction that may help reduce the development of HFpEF.

“Since it can be easily, quickly, and inexpensively measured, routine or repeated measurements of the FIB-4 index could help in selecting preferred candidates for detailed examination of HFpEF risk, which may improve clinical outcomes by diagnosing HFpEF at an early stage,” they wrote.

The study was supported by grants from the Osaka Medical Research Foundation for Intractable Disease, the Japan Arteriosclerosis Prevention Fund, the Japan Society for the Promotion of Science, and the Japan Heart Foundation. The authors disclosed no conflicts.


FROM GASTRO HEP ADVANCES
