CV deaths jumped in 2020, reflecting pandemic toll
Cardiovascular-related deaths increased dramatically in 2020, marking the largest single-year increase since 2015 and surpassing the previous record from 2003, according to the American Heart Association’s 2023 Statistical Update.
During the first year of the COVID-19 pandemic, the largest increases in cardiovascular disease (CVD) deaths were seen among Asian, Black, and Hispanic people.
“We thought we had been improving as a country with respect to CVD deaths over the past few decades,” Connie Tsao, MD, chair of the AHA Statistical Update writing committee, told this news organization.
Since 2020, however, those trends have changed. Dr. Tsao, a staff cardiologist at Beth Israel Deaconess Medical Center and assistant professor of medicine at Harvard Medical School, both in Boston, noted the firsthand experience that many clinicians had in seeing the shift.
“We observed this sharp rise in age-adjusted CVD deaths, which corresponds to the COVID-19 pandemic,” she said. “Those of us health care providers knew from the overfull hospitals and ICUs that clearly COVID took a toll, particularly in those with cardiovascular risk factors.”
The AHA Statistical Update was published online in the journal Circulation.
Data on deaths
Each year, the American Heart Association and National Institutes of Health report the latest statistics related to heart disease, stroke, and cardiovascular risk factors. The 2023 update includes additional information about pandemic-related data.
Overall, the number of people who died from cardiovascular disease increased during the first year of the pandemic, rising from 876,613 in 2019 to 928,741 in 2020. This topped the previous high of 910,000 in 2003.
In addition, the age-adjusted mortality rate increased for the first time in several years, Dr. Tsao said, by a “fairly substantial” 4.6%. The age-adjusted mortality rate incorporates the variability in the aging population from year to year, accounting for higher death rates among older people.
“Even though our total number of deaths has been slowly increasing over the past decade, we have seen a decline each year in our age-adjusted rates – until 2020,” she said. “I think that is very indicative of what has been going on within our country – and the world – in light of people of all ages being impacted by the COVID-19 pandemic, especially before vaccines were available to slow the spread.”
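For readers unfamiliar with the statistic, the arithmetic behind direct age adjustment is simple. The sketch below uses entirely hypothetical weights and stratum rates (the real calculation uses the 2000 U.S. standard population) purely to show why the adjusted rate can move differently from the raw death count.

```python
# Illustrative direct age standardization. The weights and rates below are
# hypothetical (actual AHA/NCHS figures use the 2000 U.S. standard
# population); they are chosen only to show the mechanics.

# (age band, standard-population weight, stratum death rate per 100,000)
strata = [
    ("0-44",  0.62,   15.0),
    ("45-64", 0.26,  180.0),
    ("65+",   0.12, 2500.0),
]

# The age-adjusted rate is the weighted sum of stratum-specific rates,
# so it is insensitive to the population simply growing older.
age_adjusted = sum(weight * rate for _, weight, rate in strata)
print(f"age-adjusted rate: {age_adjusted:.1f} per 100,000")  # 356.1
```

Because the weights are fixed across years, a rise in this figure, as seen in 2020, reflects higher death rates within age groups rather than an older population.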
The largest increases in CVD-related deaths occurred among Asian, Black, and Hispanic people, who were most heavily affected during the first year of the pandemic.
“People from communities of color were among those most highly impacted, especially early on, often due to a disproportionate burden of cardiovascular risk factors, such as hypertension and obesity,” Michelle Albert, MD, MPH, president of AHA and a professor of medicine at the University of California, San Francisco, said in a statement.
Dr. Albert, who is also the director of UCSF’s Center for the Study of Adversity and Cardiovascular Disease, does research on health equity and noted the disparities seen in the 2020 numbers. “Additionally, there are socioeconomic considerations, as well as the ongoing impact of structural racism on multiple factors, including limiting the ability to access quality health care,” she said.
Additional considerations
In a special commentary, the Statistical Update writing committee pointed to the need to track data for other underrepresented communities, including LGBTQ people and people living in rural versus urban settings. The authors outlined several ways to better understand the effects of identity and social determinants of health, as well as strategies to reduce cardiovascular-related disparities.
“This year’s writing group made a concerted effort to gather information on specific social factors related to health risk and outcomes, including sexual orientation, gender identity, urbanization, and socioeconomic position,” Dr. Tsao said. “However, the data are lacking because these communities are grossly underrepresented in clinical and epidemiological research.”
For the next several years, the AHA Statistical Update will likely include more insights about the effects of the COVID-19 pandemic, as well as ongoing disparities.
“For sure, we will be continuing to see the effects of the pandemic for years to come,” Dr. Tsao said. “Recognition of the disparities in outcomes among vulnerable groups should be a call to action among health care providers and researchers, administration, and policy leaders to investigate the reasons and make changes to reverse these trends.”
The statistical update was prepared by a volunteer writing group on behalf of the American Heart Association Council on Epidemiology and Prevention Statistics Committee and Stroke Statistics Subcommittee.
A version of this article first appeared on Medscape.com.
FROM CIRCULATION
Two AI optical diagnosis systems appear clinically comparable for small colorectal polyps
In a head-to-head comparison, two commercially available computer-aided diagnosis systems appeared clinically equivalent for the optical diagnosis of small colorectal polyps, according to a research letter published in Gastroenterology.
For the optical diagnosis of diminutive colorectal polyps, both CAD EYE (Fujifilm Co.) and GI Genius (Medtronic) performed comparably and met guideline performance thresholds for implementing the cost-saving leave-in-situ and resect-and-discard strategies, wrote Cesare Hassan, MD, PhD, associate professor of gastroenterology at Humanitas University and member of the endoscopy unit at Humanitas Clinical Research Hospital in Milan, and colleagues.
“Screening colonoscopy is effective in reducing colorectal cancer risk but also represents a substantial financial burden,” the authors wrote. “Novel strategies based on artificial intelligence may enable targeted removal only of polyps deemed to be neoplastic, thus reducing patient burden for unnecessary removal of nonneoplastic polyps and reducing costs for histopathology.”
Several computer-aided diagnosis (CADx) systems are commercially available for optical diagnosis of colorectal polyps, the authors wrote. However, each artificial intelligence (AI) system has been trained and validated with different polyp datasets, which may contribute to variability and affect the clinical outcome of optical diagnosis-based strategies.
Dr. Hassan and colleagues conducted a prospective comparison trial at a single center to examine the real-life performance of the two CADx systems for optical diagnosis of polyps 5 mm or smaller.
At colonoscopy, the same polyp was visualized by the same endoscopist on two different monitors simultaneously with the respective output from each of the two CADx systems. Pre- and post-CADx human diagnoses were also collected.
Between January 2022 and March 2022, 176 consecutive patients aged 40 and older underwent colonoscopy for colorectal cancer screening, polypectomy surveillance, or gastrointestinal symptoms. Men made up 60.8% of participants, and the average age was 60.
Among 543 polyps detected and removed, 169 (31.3%) were adenomas, and 373 (68.7%) were nonadenomas. Of those, 325 (59.9%) were rectosigmoid polyps of 5 mm or less in diameter and eligible for analyses in the study. This included 44 adenomas (13.5%) and 281 nonadenomas (86.5%).
The two CADx systems were grouped as CADx-A for CAD EYE and CADx-B for GI Genius. CADx-A provided prediction output for all 325 rectosigmoid polyps of 5 mm or less, whereas CADx-B wasn’t able to provide output for six of the nonadenomas, which were excluded from the analysis.
The negative predictive value (NPV) for rectosigmoid polyps of 5 mm or less was 97% for CADx-A and 97.7% for CADx-B, the authors wrote. The American Society for Gastrointestinal Endoscopy recommends an NPV of at least 90% to support optical diagnosis.
In addition, the sensitivity for adenomas was 81.8% for CADx-A and 86.4% for CADx-B. The accuracy of CADx-A was slightly higher, at 93.2%, as compared with 91.5% for CADx-B.
Based on AI prediction alone, 269 of 319 polyps (84.3%) with CADx-A and 260 of 319 polyps (81.5%) with CADx-B would have been classified as nonneoplastic and avoided removal. This corresponded to a specificity of 94.9% for CADx-A and 92.4% for CADx-B, which wasn’t significantly different, the authors wrote. Concordance in histology prediction between the two systems was 94.7%.
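All of the performance measures quoted in this study derive from a standard 2×2 confusion matrix. The minimal sketch below uses hypothetical counts chosen to approximate the CADx-A figures reported above; it is illustrative, not the study's raw data.

```python
# Minimal sketch of the diagnostic metrics reported above, computed from a
# 2x2 confusion matrix. Counts are hypothetical, chosen to approximate the
# CADx-A results; "positive" means the system predicted adenoma.

def optical_dx_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic-performance measures from a confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # adenomas correctly flagged
        "specificity": tn / (tn + fp),   # nonadenomas correctly cleared
        "npv": tn / (tn + fn),           # safety metric for leave-in-situ
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

metrics = optical_dx_metrics(tp=36, fp=14, tn=261, fn=8)  # hypothetical counts
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")

# The leave-in-situ strategy requires an NPV of at least 90% for diminutive
# rectosigmoid polyps, the threshold cited in the article.
assert metrics["npv"] >= 0.90
```

With these counts, sensitivity is 81.8%, specificity 94.9%, and NPV 97.0%, matching the CADx-A values in the text and illustrating why NPV, the chance that a polyp called nonneoplastic truly is, anchors the leave-in-situ decision.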
Based on the 2020 U.S. Multi-Society Task Force on Colorectal Cancer (USMSTF) guidelines, the agreement with histopathology in surveillance interval assignment was 84.7% for CADx-A and 89.2% for CADx-B. Based on the 2020 European Society of Gastrointestinal Endoscopy (ESGE) guidelines, the agreement was 98.3% for both systems.
For rectosigmoid polyps of 5 mm or less, the NPV of unassisted optical diagnosis was 97.8% for a high-confidence diagnosis, but it wasn’t significantly different from the NPV of CADx-A (96.9%) or CADx-B (97.6%). The NPV of a CADx-assisted optical diagnosis at high confidence was 97.7%, without statistically significant differences as compared with unassisted interpretation.
Based on the 2020 USMSTF and ESGE guidelines, the agreement between unassisted interpretation and histopathology in surveillance interval assignment was 92.6% and 98.9%, respectively. There was total agreement between unassisted interpretation and CADx-assisted interpretation in surveillance interval assignment based on both guidelines.
As in previous studies, unassisted endoscopic diagnosis was on par with CADx-assisted diagnosis in both technical accuracy and clinical outcomes. The study authors attributed the lack of additional benefit from CADx to the high performance of unassisted endoscopist diagnosis, with a 97.8% NPV for rectosigmoid polyps and 90% or greater concordance with histology in postpolypectomy surveillance intervals. Notably, the unassisted human endoscopist was the only reader to achieve 90% or greater agreement in postpolypectomy surveillance intervals under the U.S. guidelines, mainly because of a very high specificity.
“This confirms the complexity of the human-machine interaction that should not be marginalized in the stand-alone performance of the machine,” the authors wrote.
However, the high accuracy of unassisted endoscopists in the academic center in Italy is unlikely to mirror the real performance in community settings, they added. Future studies should focus on nontertiary centers to show the additional benefit, if any, that CADx provides for leave-in-situ colorectal polyps.
“A high degree of concordance in clinical outcomes was shown when directly comparing in vivo two different systems of CADx,” the authors concluded. “This reassured our confidence in the standardization of performance that may be achieved with the incorporation of AI in clinical practice, irrespective of the availability of multiple systems.”
The study authors declared no funding source for this study. Several authors reported consulting relationships with numerous companies, including Fujifilm and Medtronic, which make the CAD EYE and GI Genius systems, respectively.
Colonoscopy is the gold standard test to reduce an individual’s chance of developing colorectal cancer. The latest tool to improve colonoscopy outcomes is integrating artificial intelligence (AI) during the exam. AI systems offer both computer-aided detection (CADe) and computer-aided diagnosis (CADx). Accurate CADx could enable a cost-effective strategy of removing only neoplastic polyps.
The study by Hassan et al. compared two AI CADx systems for optical diagnosis of colorectal polyps ≤ 5 mm. Each polyp was evaluated simultaneously by both AI systems, though the endoscopist first recorded an unassisted diagnosis. The two systems (CAD EYE [Fujifilm Co.] and GI Genius [Medtronic]) had similar specificity: 94.9% and 92.4%, respectively. Furthermore, the systems demonstrated negative predictive values of 96.9% and 97.6%, respectively, exceeding the American Society for Gastrointestinal Endoscopy’s threshold of at least 90%.
A surprising finding was that the unassisted endoscopist, before seeing the CADx interpretation, achieved a negative predictive value of 97.8%, leaving negligible room for benefit once CADx was activated. This level of performance is likely lower in community practice, however, and clinical trials will be needed to confirm it.
CADx and CADe systems are rapidly entering the clinical realm of colonoscopy. It is critical to be able to objectively review the performance of these AI systems in real-life clinical settings, assessing accuracy for both CADx and CADe. Clinicians must balance striving for high-quality colonoscopy outcomes with the cost of innovative technology like AI. However, it is reassuring that the initial CADx systems have similarly high accuracy for polyp interpretation, since most practices will incorporate a single system. Future studies will be needed to compare not only the accuracy of AI platforms offering CADx and CADe, but also the many other features that will be entering the endoscopy space.
Seth A. Gross, MD, is professor of medicine at NYU Grossman School of Medicine and clinical chief of gastroenterology and hepatology at NYU Langone Health. He disclosed financial relationships with Medtronic, Olympus, Iterative Scopes, and Micro-Tech Endoscopy.
FROM GASTROENTEROLOGY
Noninvasive liver test may help select asymptomatic candidates for heart failure tests
A noninvasive test for liver disease may be a useful, low-cost screening tool to select asymptomatic candidates for a detailed examination of heart failure with preserved ejection fraction (HFpEF), say authors of a report published in Gastro Hep Advances.
The fibrosis-4 (FIB-4) index was a significant predictor of high HFpEF risk, wrote Chisato Okamoto, MD, of the department of medical biochemistry at Osaka University Graduate School of Medicine and the National Cerebral and Cardiovascular Center in Japan, and colleagues.
“Recognition of heart failure with preserved ejection fraction at an early stage in mass screening is desirable, but difficult to achieve,” the authors wrote. “The FIB-4 index is calculated using only four parameters that are routinely evaluated in general health check-up programs.”
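The four routinely measured parameters the authors refer to are age, AST, ALT, and platelet count, combined in the standard FIB-4 formula. A minimal sketch follows; the patient values are hypothetical, for illustration only.

```python
import math

def fib4_index(age_years: float, ast_u_l: float, alt_u_l: float,
               platelets_1e9_l: float) -> float:
    """FIB-4 = (age x AST) / (platelet count x sqrt(ALT)).

    Inputs are the four routine check-up parameters: age (years),
    AST (U/L), ALT (U/L), and platelet count (10^9/L).
    """
    return (age_years * ast_u_l) / (platelets_1e9_l * math.sqrt(alt_u_l))

# Hypothetical patient, for illustration only:
print(round(fib4_index(age_years=65, ast_u_l=40, alt_u_l=35,
                       platelets_1e9_l=210), 2))  # ~2.09
```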
HFpEF has emerged in recent years as a disease with a poor prognosis, they wrote. Early diagnosis can be challenging for several reasons, particularly because patients with HFpEF are often asymptomatic until late in the disease process and have normal left ventricular filling pressures at rest. A tool that selects probable cases from subclinical participants in a health check-up program would let clinicians refer those patients for a diastolic stress test, which is considered the gold standard for diagnosing HFpEF.
Previous studies have found that the FIB-4 index, a noninvasive tool to estimate liver stiffness and fibrosis, is associated with a higher risk of major adverse cardiovascular events (MACE) in patients with HFpEF. In addition, patients with nonalcoholic fatty liver disease (NAFLD) have a twofold higher prevalence of HFpEF than the general population.
Dr. Okamoto and colleagues examined the association between the FIB-4 index and HFpEF risk based on the Heart Failure Association’s diagnostic algorithm for HFpEF in patients with breathlessness (HFA-PEFF). The researchers looked at the prognostic impact of the FIB-4 index in 710 patients who participated in a health check-up program in the rural community of Arita-cho, Japan, between 2006 and 2007. They excluded participants with a history of cardiovascular disease or reduced left ventricular systolic function (LVEF < 50%). Researchers calculated the FIB-4 index and HFA-PEFF score for all participants.
First, using the HFA-PEFF scores, the researchers sorted participants into five groups by HFpEF risk: 215 (30%) with zero points, 100 (14%) with 1 point, 171 (24%) with 2 points, 163 (23%) with 3 points, and 61 (9%) with 4-6 points. Participants in the high-risk group (scores 4-6) were older, were mostly men, and had higher blood pressure and alcohol intake, as well as higher rates of hypertension, dyslipidemia, and liver disease. The higher the HFpEF risk group, the higher the rates of all-cause mortality, hospitalization for heart failure, and MACE.
Overall, the FIB-4 index correlated with the HFpEF risk groups, increasing stepwise across them: 0.94 in the low-risk group, 1.45 in the intermediate-risk group, and 1.99 in the high-risk group, the authors wrote. The FIB-4 index also correlated with markers associated with components of the HFA-PEFF scoring system.
Using multivariate logistic regression analysis, the FIB-4 index was associated with a high HFpEF risk, and an increase in FIB-4 was associated with increased odds of high HFpEF risk. The association remained significant across four separate models that accounted for risk factors associated with lifestyle-related diseases, blood parameters associated with liver disease, and chronic conditions such as hypertension, dyslipidemia, diabetes mellitus, and liver disease.
In additional area under the curve (AUC) analyses, the FIB-4 index was a significant predictor of high HFpEF risk. At cutoff values typically used for advanced liver fibrosis in NAFLD, a FIB-4 cutoff of 1.3 or less had a sensitivity of 85.2%, while a FIB-4 cutoff of 2.67 or higher had a specificity of 94.8%. At alternate cutoff values typically used for patients with HIV/hepatitis C virus infection, a FIB-4 cutoff of less than 1.45 had a sensitivity of 75.4%, while a FIB-4 cutoff of greater than 3.25 had a specificity of 98%.
Using cutoffs of 1.3 and 2.67, a higher FIB-4 was associated with higher rates of clinical events and MACE, as well as a higher HFpEF risk. Using the alternate cutoffs of 1.45 and 3.25, prognostic stratification of clinical events and MACE was also possible.
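As a sketch of how the dual-cutoff triage described above could be applied in screening code, the snippet below uses the cutoff pairs named in the text; the mapping of each category to a follow-up action is our illustration, not a protocol from the study.

```python
# Sketch of dual-cutoff triage using the cutoff pairs named above. The
# mapping from category to follow-up action is illustrative only, not a
# protocol from the study.

NAFLD_CUTOFFS = (1.3, 2.67)      # cutoffs used for advanced fibrosis in NAFLD
HIV_HCV_CUTOFFS = (1.45, 3.25)   # alternate cutoffs from HIV/HCV practice

def triage(fib4: float, cutoffs=NAFLD_CUTOFFS) -> str:
    low, high = cutoffs
    if fib4 <= low:
        return "low: sensitive rule-out of high HFpEF risk"
    if fib4 >= high:
        return "high: specific rule-in; consider diastolic stress testing"
    return "indeterminate: further noninvasive workup"

for value in (0.9, 1.8, 3.0):
    print(value, "->", triage(value))
```

The low cutoff trades specificity for sensitivity (fewer missed cases), while the high cutoff does the reverse, which matches the sensitivity and specificity pattern reported in the study.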
When all variables were included in the multivariate model, the FIB-4 index remained a significant prognostic predictor. Stratification by the FIB-4 index was also an independent predictor of all-cause mortality and hospitalization for heart failure.
Although additional studies are needed to clarify the interaction between liver and heart function, the study authors wrote, the findings provide valuable insights into the cardiohepatic interaction that may help reduce the development of HFpEF.
“Since it can be easily, quickly, and inexpensively measured, routine or repeated measurements of the FIB-4 index could help in selecting preferred candidates for detailed examination of HFpEF risk, which may improve clinical outcomes by diagnosing HFpEF at an early stage,” they wrote.
The study was supported by grants from the Osaka Medical Research Foundation for Intractable Disease, the Japan Arteriosclerosis Prevention Fund, the Japan Society for the Promotion of Science, and the Japan Heart Foundation. The authors disclosed no conflicts.
The 2021 NAFLD clinical care pathway is a shining example of how a simple score like the fibrosis-4 (FIB-4) index – paired sequentially with a second noninvasive test like vibration-controlled elastography – can provide accurate, cost-effective screening and risk stratification while limiting invasive testing such as liver biopsy.
Broader use of FIB-4 by cardiovascular and hepatology providers may enable earlier identification of NAFLD, HFpEF, or both.
Anand S. Shah, MD, is director of hepatology at Atlanta VA Healthcare and assistant professor of medicine, division of digestive disease, department of medicine, Emory University, Atlanta. He has no financial conflicts.
The 2021 NAFLD clinical care pathway is a shining example of how a simple score like the fibrosis-4 (FIB-4) index – paired sequentially with a second noninvasive test like vibration-controlled elastography – can provide an accurate, cost-effective screening tool and risk stratification and further limit invasive testing such as liver biopsy.
Broader use of FIB-4 by cardiovascular and hepatology providers may increase earlier identification of NAFLD or HFpEF or both.
Anand S. Shah, MD, is director of hepatology at Atlanta VA Healthcare and assistant professor of medicine, division of digestive disease, department of medicine, Emory University, Atlanta. He has no financial conflicts.
The 2021 NAFLD clinical care pathway is a shining example of how a simple score like the fibrosis-4 (FIB-4) index – paired sequentially with a second noninvasive test like vibration-controlled elastography – can provide an accurate, cost-effective screening tool and risk stratification and further limit invasive testing such as liver biopsy.
Broader use of FIB-4 by cardiovascular and hepatology providers may increase earlier identification of NAFLD or HFpEF or both.
Anand S. Shah, MD, is director of hepatology at Atlanta VA Healthcare and assistant professor of medicine, division of digestive disease, department of medicine, Emory University, Atlanta. He has no financial conflicts.
A noninvasive test for liver disease may be a useful, low-cost screening tool to select asymptomatic candidates for a detailed examination of heart failure with preserved ejection fraction (HFpEF), say authors of a report published in Gastro Hep Advances.
The fibrosis-4 (FIB-4) index was a significant predictor of high HFpEF risk, wrote Chisato Okamoto, MD, of the department of medical biochemistry at Osaka University Graduate School of Medicine and the National Cerebral and Cardiovascular Center in Japan, and colleagues.
“Recognition of heart failure with preserved ejection fraction at an early stage in mass screening is desirable, but difficult to achieve,” the authors wrote. “The FIB-4 index is calculated using only four parameters that are routinely evaluated in general health check-up programs.”
HFpEF is an emerging disease in recent years with a poor prognosis, they wrote. Early diagnosis can be challenging for several reasons, particularly because HFpEF patients are often asymptomatic until late in the disease process and have normal left ventricular filling pressures at rest. By using a tool to select probable cases from subclinical participants in a health check-up program, clinicians can refer patients for a diastolic stress test, which is considered the gold standard for diagnosing HFpEF.
Previous studies have found that the FIB-4 index, a noninvasive tool to estimate liver stiffness and fibrosis, is associated with a higher risk of major adverse cardiovascular events (MACE) in patients with HFpEF. In addition, patients with nonalcoholic fatty liver disease (NAFLD) have a twofold higher prevalence of HFpEF than the general population.
Dr. Okamoto and colleagues examined the association between the FIB-4 index and HFpEF risk based on the Heart Failure Association’s diagnostic algorithm for HFpEF in patients with breathlessness (HFA-PEFF). The researchers looked at the prognostic impact of the FIB-4 index in 710 patients who participated in a health check-up program in the rural community of Arita-cho, Japan, between 2006 and 2007. They excluded participants with a history of cardiovascular disease or reduced left ventricular systolic function (LVEF < 50%). Researchers calculated the FIB-4 index and HFA-PEFF score for all participants.
First, using the HFA-PEFF scores, the researchers sorted participants into five groups by HFpEF risk: 215 (30%) with zero points, 100 (14%) with 1 point, 171 (24%) with 2 points, 163 (23%) with 3 points, and 61 (9%) with 4-6 points. Participants in the high-risk group (scores 4-6) were older, mostly men, and had higher blood pressure, alcohol intake, hypertension, dyslipidemia, and liver disease. The higher the HFpEF risk group, the higher the rates of all-cause mortality, hospitalization for heart failure, and MACE.
Overall, the FIB-4 index was correlated with the HFpEF risk groups and showed a stepwise increase across the groups, with .94 for the low-risk group, 1.45 for the intermediate-risk group, and 1.99 for the high-risk group, the authors wrote. The FIB-4 index also correlated with markers associated with components of the HFA-PEFF scoring system.
Using multivariate logistic regression analysis, the FIB-4 index was associated with a high HFpEF risk, and an increase in FIB-4 was associated with increased odds of high HFpEF risk. The association remained significant across four separate models that accounted for risk factors associated with lifestyle-related diseases, blood parameters associated with liver disease, and chronic conditions such as hypertension, dyslipidemia, diabetes mellitus, and liver disease.
In additional area under the curve (AUC) analyses, the FIB-4 index was a significant predictor of high HFpEF risk. At cutoff values typically used for advanced liver fibrosis in NAFLD, a FIB-4 cutoff of 1.3 or less had a sensitivity of 85.2%, while a FIB-4 cutoff of 2.67 or higher had a specificity of 94.8%. At alternate cutoff values typically used for patients with HIV/hepatitis C virus infection, a FIB-4 cutoff of less than 1.45 had a sensitivity of 75.4%, while a FIB-4 cutoff of greater than 3.25 had a specificity of 98%.
Using cutoffs of 1.3 and 2.67, a higher FIB-4 was associated with higher rates of clinical events and MACE, as well as a higher HFpEF risk. Using the alternate cutoffs of 1.45 and 3.25, prognostic stratification of clinical events and MACE was also possible.
When all variables were included in the multivariate model, the FIB-4 index remained a significant prognostic predictor. The FIB-4 index stratified clinical prognosis was also an independent predictor of all-cause mortality and hospitalization for heart failure.
Although additional studies are needed to reveal the interaction between liver and heart function, the study authors wrote, the findings provide valuable insights that can help discover the cardiohepatic interaction to reduce the development of HFpEF.
“Since it can be easily, quickly, and inexpensively measured, routine or repeated measurements of the FIB-4 index could help in selecting preferred candidates for detailed examination of HFpEF risk, which may improve clinical outcomes by diagnosing HFpEF at an early stage,” they wrote.
The study was supported by grants from the Osaka Medical Research Foundation for Intractable Disease, the Japan Arteriosclerosis Prevention Fund, the Japan Society for the Promotion of Science, and the Japan Heart Foundation. The authors disclosed no conflicts.
A noninvasive test for liver disease may be a useful, low-cost screening tool to select asymptomatic candidates for a detailed examination of heart failure with preserved ejection fraction (HFpEF), say authors of a report published in Gastro Hep Advances.
The fibrosis-4 (FIB-4) index was a significant predictor of high HFpEF risk, wrote Chisato Okamoto, MD, of the department of medical biochemistry at Osaka University Graduate School of Medicine and the National Cerebral and Cardiovascular Center in Japan, and colleagues.
“Recognition of heart failure with preserved ejection fraction at an early stage in mass screening is desirable, but difficult to achieve,” the authors wrote. “The FIB-4 index is calculated using only four parameters that are routinely evaluated in general health check-up programs.”
HFpEF is an emerging disease in recent years with a poor prognosis, they wrote. Early diagnosis can be challenging for several reasons, particularly because HFpEF patients are often asymptomatic until late in the disease process and have normal left ventricular filling pressures at rest. By using a tool to select probable cases from subclinical participants in a health check-up program, clinicians can refer patients for a diastolic stress test, which is considered the gold standard for diagnosing HFpEF.
Previous studies have found that the FIB-4 index, a noninvasive tool to estimate liver stiffness and fibrosis, is associated with a higher risk of major adverse cardiovascular events (MACE) in patients with HFpEF. In addition, patients with nonalcoholic fatty liver disease (NAFLD) have a twofold higher prevalence of HFpEF than the general population.
Dr. Okamoto and colleagues examined the association between the FIB-4 index and HFpEF risk based on the Heart Failure Association’s diagnostic algorithm for HFpEF in patients with breathlessness (HFA-PEFF). The researchers looked at the prognostic impact of the FIB-4 index in 710 patients who participated in a health check-up program in the rural community of Arita-cho, Japan, between 2006 and 2007. They excluded participants with a history of cardiovascular disease or reduced left ventricular systolic function (LVEF < 50%). Researchers calculated the FIB-4 index and HFA-PEFF score for all participants.
First, using the HFA-PEFF scores, the researchers sorted participants into five groups by HFpEF risk: 215 (30%) with zero points, 100 (14%) with 1 point, 171 (24%) with 2 points, 163 (23%) with 3 points, and 61 (9%) with 4-6 points. Participants in the high-risk group (scores 4-6) were older, mostly men, and had higher blood pressure, alcohol intake, hypertension, dyslipidemia, and liver disease. The higher the HFpEF risk group, the higher the rates of all-cause mortality, hospitalization for heart failure, and MACE.
Overall, the FIB-4 index was correlated with the HFpEF risk groups and showed a stepwise increase across the groups, with .94 for the low-risk group, 1.45 for the intermediate-risk group, and 1.99 for the high-risk group, the authors wrote. The FIB-4 index also correlated with markers associated with components of the HFA-PEFF scoring system.
Using multivariate logistic regression analysis, the FIB-4 index was associated with a high HFpEF risk, and an increase in FIB-4 was associated with increased odds of high HFpEF risk. The association remained significant across four separate models that accounted for risk factors associated with lifestyle-related diseases, blood parameters associated with liver disease, and chronic conditions such as hypertension, dyslipidemia, diabetes mellitus, and liver disease.
In additional area under the curve (AUC) analyses, the FIB-4 index was a significant predictor of high HFpEF risk. At cutoff values typically used for advanced liver fibrosis in NAFLD, a FIB-4 cutoff of 1.3 or less had a sensitivity of 85.2%, while a FIB-4 cutoff of 2.67 or higher had a specificity of 94.8%. At alternate cutoff values typically used for patients with HIV/hepatitis C virus infection, a FIB-4 cutoff of less than 1.45 had a sensitivity of 75.4%, while a FIB-4 cutoff of greater than 3.25 had a specificity of 98%.
Using cutoffs of 1.3 and 2.67, a higher FIB-4 was associated with higher rates of clinical events and MACE, as well as a higher HFpEF risk. Using the alternate cutoffs of 1.45 and 3.25, prognostic stratification of clinical events and MACE was also possible.
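Taken together, the two NAFLD-derived cutoffs suggest a simple three-band triage for screening programs; a minimal sketch (the suggested actions are illustrative, not a protocol from the paper):

```python
def triage_by_fib4(fib4: float) -> str:
    """Three-band triage using the NAFLD advanced-fibrosis cutoffs (1.3 and 2.67)."""
    if fib4 <= 1.3:
        return "low band (sensitivity 85.2%): high HFpEF risk less likely"
    if fib4 >= 2.67:
        return "high band (specificity 94.8%): refer for detailed HFpEF workup"
    return "indeterminate band: clinical judgment, consider repeat measurement"
```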
When all variables were included in the multivariate model, the FIB-4 index remained a significant prognostic predictor. Risk stratification by the FIB-4 index was also an independent predictor of all-cause mortality and hospitalization for heart failure.
Although additional studies are needed to clarify the interaction between liver and heart function, the study authors wrote, the findings provide insights into the cardiohepatic interaction that could help reduce the development of HFpEF.
“Since it can be easily, quickly, and inexpensively measured, routine or repeated measurements of the FIB-4 index could help in selecting preferred candidates for detailed examination of HFpEF risk, which may improve clinical outcomes by diagnosing HFpEF at an early stage,” they wrote.
The study was supported by grants from the Osaka Medical Research Foundation for Intractable Disease, the Japan Arteriosclerosis Prevention Fund, the Japan Society for the Promotion of Science, and the Japan Heart Foundation. The authors disclosed no conflicts.
FROM GASTRO HEP ADVANCES
Acute hepatic porphyrias no longer as rare as previously thought
Acute hepatic porphyrias (AHPs) are more common than previously recognized, according to a new clinical practice update from the American Gastroenterological Association.
For acute attacks, treatment should include intravenous hemin, and for patients with recurrent attacks, a newly approved therapy called givosiran should be considered, wrote the authors of the update, which was published Jan. 13 in Gastroenterology.
“Diagnoses of AHPs are often missed, with a delay of more than 15 years from initial presentation. The key to early diagnosis is to consider the diagnosis, especially in patients with recurring severe abdominal pain not ascribable to other causes,” wrote the authors, who were led by Bruce Wang, MD, a hepatologist with the University of California, San Francisco.
AHPs are inherited disorders of heme metabolism, which include acute intermittent porphyria, hereditary coproporphyria, variegate porphyria, and porphyria due to severe deficiency of 5-aminolevulinic acid dehydratase.
Acute intermittent porphyria (AIP) is the most common type, with an estimated prevalence of symptomatic AHP of 1 in 100,000 patients. However, population-level genetic studies show that the prevalence of pathogenic variants for AIP is between 1 in 1,300 and 1 in 1,785.
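Dividing the two prevalence figures suggests that only a small fraction of variant carriers ever develop symptomatic disease; a back-of-the-envelope calculation using the numbers above:

```python
symptomatic_prev = 1 / 100_000     # symptomatic AHP prevalence
carrier_prev_high = 1 / 1_300      # AIP pathogenic variants, upper estimate
carrier_prev_low = 1 / 1_785       # AIP pathogenic variants, lower estimate

# Implied penetrance: symptomatic cases as a share of variant carriers
print(f"{symptomatic_prev / carrier_prev_high:.1%}")  # 1.3%
print(f"{symptomatic_prev / carrier_prev_low:.1%}")   # 1.8%
```

That low implied penetrance, on the order of 1%-2%, is consistent with the authors' observation later in the update that most people with pathogenic variants never experience severe attacks.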
The major clinical presentation includes attacks of severe abdominal pain, nausea, vomiting, constipation, muscle weakness, neuropathy, tachycardia, and hypertension, typically without peritoneal signs or abnormalities on cross-sectional imaging.
Recent advances in treatment have improved the outlook for patients with AHP. To provide timely guidance, the authors developed 12 clinical practice advice statements on the diagnosis and management of AHPs based on a review of the published literature and expert opinion.
First, AHP screening should be considered in the evaluation of all patients, particularly women of childbearing age (15-50 years), with recurrent severe abdominal pain that has no clear etiology. About 90% of patients with symptomatic AHP are women, and more than 90% of them experience only one or a few acute attacks in their lifetime, which are often precipitated by factors that increase the activity of the enzyme ALAS1 in the liver.
For initial AHP diagnosis, biochemical testing should measure porphobilinogen (PBG) and delta-aminolevulinic acid (ALA), corrected to creatinine, on a random urine sample. All patients with significantly elevated urinary PBG or ALA should initially be presumed to have AHP, and during acute attacks, both will be elevated at least fivefold above the upper limit of normal. Because ALA and PBG are porphyrin precursors, urine porphyrin testing should not be used alone for AHP screening.
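As a sketch of that biochemical screening logic — upper-limit-of-normal values vary by laboratory and assay, so they are passed in as parameters rather than hard-coded:

```python
def suggests_acute_porphyria_attack(urine_pbg: float, urine_ala: float,
                                    pbg_uln: float, ala_uln: float) -> bool:
    """True when creatinine-corrected urinary PBG and ALA are both at least
    fivefold above the upper limit of normal, as expected during acute attacks."""
    return urine_pbg >= 5 * pbg_uln and urine_ala >= 5 * ala_uln
```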
After that, genetic testing should be used to confirm the AHP diagnosis, as well as the specific type of AHP. Pathogenic variants in the four genes ALAD, HMBS, CPOX, and PPOX cause aminolevulinic acid dehydratase deficiency porphyria, acute intermittent porphyria, hereditary coproporphyria, and variegate porphyria, respectively. Whole-gene sequencing identifies about 95%-99% of cases. First-degree family members should be screened with genetic testing, and those who are mutation carriers should be counseled.
For acute attacks of AHP that are severe enough to require hospitalization, the currently approved treatment is intravenous hemin infusion, usually given once daily at a dose of 3-4 mg/kg body weight for 3-5 days. Due to potential thrombophlebitis, it’s best to administer hemin in a high-flow central vein via a peripherally inserted central catheter or central port.
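The dosing arithmetic itself is simple; a sketch for a hypothetical patient (illustration only, not dosing guidance):

```python
def hemin_daily_dose_mg(weight_kg: float, dose_mg_per_kg: float = 3.0) -> float:
    """Once-daily hemin at 3-4 mg/kg body weight, per the update."""
    if not 3.0 <= dose_mg_per_kg <= 4.0:
        raise ValueError("update recommends 3-4 mg/kg once daily")
    return weight_kg * dose_mg_per_kg

# Hypothetical 70-kg patient, 4-day course at 3 mg/kg
daily_mg = hemin_daily_dose_mg(70.0)   # 210 mg/day
course_mg = daily_mg * 4               # 840 mg over 4 days
```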
In addition, treatment for acute attacks should include analgesics, antiemetics, and management of systemic arterial hypertension, tachycardia, hyponatremia, and hypomagnesemia. The primary goal of treatment during an acute attack is to decrease ALA production. Patients should be counseled to avoid identifiable triggers, such as porphyrinogenic medications, excess alcohol intake, tobacco use, and caloric deprivation.
Although recent advances have improved treatment for acute attacks, management for patients with frequent attacks remains challenging, the authors wrote. About 3%-5% of patients with symptomatic AHP experience recurrent attacks, defined as four or more attacks per year. These attacks aren’t typically associated with identifiable triggers, although some that occur during the luteal phase of a patient’s menstrual cycle are believed to be triggered by progesterone. However, treatment with hormonal suppression therapy, such as GnRH agonists, has had limited success.
Off-label use of prophylactic intravenous heme therapy is common, although the effectiveness in preventing recurrent attacks isn’t well-established. In addition, chronic hemin use is associated with several complications, including infections, iron overload, and the need for indwelling central venous catheters.
Recently, the Food and Drug Administration approved givosiran, a small interfering RNA-based therapy that targets delta-aminolevulinate synthase 1, for treatment in adults with AHP. Monthly subcutaneous therapy appears to significantly lower rates of acute attacks among patients who experience recurrent attacks.
“We suggest prescribing givosiran only for those patients with recurrent acute attacks that are both biochemically and genetically confirmed,” the authors wrote. “Due to limited safety data, givosiran should not be used in women who are pregnant or planning a pregnancy.”
In the most severe cases, liver transplantation should be limited to patients with intractable symptoms and a significantly decreased quality of life who are refractory to pharmacotherapy. If living donor transplantation is considered, genetic testing should be used to screen related living donors, since HMBS pathogenic variants in asymptomatic donors could result in poor posttransplantation outcomes.
In the long term, patients with AHP should be monitored annually for liver disease and chronic kidney disease, including serum creatinine and estimated glomerular filtration rate. Patients also face an increased risk of hepatocellular carcinoma and should begin screening at age 50, with a liver ultrasound every 6 months.
“Fortunately, most people with genetic defects never experience severe acute attacks or may experience only one or a few attacks throughout their lives,” the authors wrote.
The authors (Bruce Wang, MD, Herbert L. Bonkovsky, MD, AGAF, and Manisha Balwani, MD, MS) reported that they are part of the Porphyrias Consortium. The Porphyrias Consortium is part of the Rare Diseases Clinical Research Network, an initiative of the Division of Rare Diseases Research Innovation at the National Center for Advancing Translational Sciences. The consortium is funded through a collaboration between the center and the National Institute of Diabetes and Digestive and Kidney Diseases. Several authors disclosed funding support and honoraria for advisory board roles with various pharmaceutical companies, including Alnylam, which makes givosiran.
This article was updated 2/3/23.
FROM GASTROENTEROLOGY
Childhood behavioral, emotional problems linked to poor economic and social outcomes in adulthood
Children with chronically elevated externalizing symptoms, such as behavioral problems, or internalizing symptoms, such as mental health concerns, have an increased risk for poor economic and social outcomes in adulthood, data from a new study suggest.
Children with comorbid externalizing and internalizing symptoms were especially vulnerable to long-term economic and social exclusion.
“Research has mostly studied the outcomes of children with either behavioral problems or depression-anxiety problems. However, comorbidity is the rule rather than the exception in clinical practice,” senior author Massimiliano Orri, PhD, an assistant professor of psychiatry at McGill University and clinical psychologist with the Douglas Mental Health University Institute, both in Montreal, said in an interview.
“Our findings are important, as they show that comorbidity between externalizing and internalizing problems is associated with real-life outcomes that profoundly influence a youth’s chances to participate in society later in life,” he said.
The study was published in JAMA Network Open.
Analyzing associations
Dr. Orri and colleagues analyzed data for 3,017 children in the Quebec Longitudinal Study of Kindergarten Children, a population-based birth cohort that enrolled participants in 1986-1987 and 1987-1988 while they were attending kindergarten. The sample included 2,000 children selected at random and 1,017 children who scored at or above the 80th percentile for disruptive behavior problems.
The research team looked at the association between childhood behavioral profiles and economic and social outcomes for ages 19-37 years, including employment earnings, receipt of welfare, intimate partnerships, and having children living in the household. They obtained the outcome data from participants’ tax returns for 1998-2017.
During enrollment in the study, the children’s teachers assessed behavioral symptoms annually for ages 6-12 years using the Social Behavior Questionnaire. Based on the assessments, the research team categorized the students as having no or low symptoms, high externalizing symptoms only (such as hyperactivity, impulsivity, aggression, and rule violation), high internalizing symptoms only (such as anxiety, depression, worry, and social withdrawal), or comorbid symptoms. They looked at other variables as well, including the child’s sex, the parents’ age at the birth of their first child, the parents’ years of education, family structure, and the parents’ household income.
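A simplified sketch of the four-way classification described above; the single cutoff used here is hypothetical, since the study derived profiles from repeated annual teacher ratings rather than one score:

```python
def symptom_profile(externalizing: float, internalizing: float,
                    high_cutoff: float) -> str:
    """Assign one of the study's four profiles from symptom scores.
    high_cutoff is a hypothetical threshold for 'chronically elevated'."""
    high_ext = externalizing >= high_cutoff
    high_int = internalizing >= high_cutoff
    if high_ext and high_int:
        return "comorbid"
    if high_ext:
        return "high externalizing only"
    if high_int:
        return "high internalizing only"
    return "no or low symptoms"
```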
Among the 3,017 participants, 45.4% of children had no or low symptoms, 29.2% had high externalizing symptoms, 11.7% had high internalizing symptoms, and 13.7% had comorbid symptoms. About 53% were boys, and 47% were girls.
In general, boys were more likely to exhibit high externalizing symptoms, and girls were more likely to exhibit high internalizing symptoms. In the comorbid group, about 82% were boys, and they were more likely to have younger mothers, come from households with lower earnings when they were ages 3-5 years, and have a nonintact family at age 6 years.
The average age at follow-up was 37 years. Participants earned an average of $32,800 per year at ages 33-37 years (between 2013 and 2017). During the 20 years of follow-up, participants received welfare support for about 1.5 years, had an intimate partner for 7.4 years, and had children living in the household for 11 years.
Overall, participants in the high externalizing and high internalizing symptom profiles – and especially those in the comorbid profile – had lower earnings and a higher incidence of annual welfare receipt across early adulthood, compared with participants with low or no symptoms. They were also less likely to have an intimate partner or have children living in the household. Participants with a comorbid symptom profile earned $15,031 less per year and had a 3.79-times higher incidence of annual welfare receipt.
Lower earnings
Across the sample, men were more likely to have higher earnings and less likely to receive welfare each year, but they also were less likely to have an intimate partner or have children in the household. Among those with the high externalizing profile, men were significantly less likely to receive welfare. Among those with the comorbid profile, men were less likely to have children in the household.
Compared with the no-symptom or low-symptom profile, those in the high externalizing profile earned $5,904 less per year and had a 2-times higher incidence of welfare receipt. Those in the high internalizing profile earned $8,473 less per year, had a 2.07-times higher incidence of welfare receipt, and had a lower incidence of intimate partnership.
Compared with the high externalizing profile, those in the comorbid profile earned $9,126 less per year, had a higher incidence of annual welfare receipt, had a lower incidence of intimate partnership, and were less likely to have children in the household. Similarly, compared with the high internalizing profile, those in the comorbid profile earned $6,558 less per year and were more likely to exhibit the other poor long-term outcomes. Participants in the high internalizing profile earned $2,568 less per year than those in the high externalizing profile.
During a 40-year working career, the estimated lost personal employment earnings were $140,515 for the high externalizing profile, $201,657 for the high internalizing profile, and $357,737 for the comorbid profile, compared with those in the no-symptom or low-symptom profile.
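Each 40-year total is about 23.8 times the corresponding annual difference rather than 40 times, which is what one would expect if future losses were discounted to present value at roughly 3% per year. A sketch of that reconstruction (the discount rate is our inference, not a figure reported in the study):

```python
def pv_annuity_factor(rate: float, years: int) -> float:
    """Present value of losing $1 per year for `years` years at a given discount rate."""
    return (1 - (1 + rate) ** -years) / rate

factor = pv_annuity_factor(0.0282, 40)    # ~23.8, the ratio implied by the totals
for annual_loss in (5_904, 8_473, 15_031):
    print(round(annual_loss * factor))    # ~140,500 / ~201,700 / ~357,800
```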
“We know that children with externalizing and internalizing symptoms can have many problems in the short term – like social difficulties and lower education attainment – but it’s important to also understand the potential long-term outcomes,” study author Francis Vergunst, DPhil/PhD, an associate professor of child psychosocial difficulties at the University of Oslo, told this news organization.
“For example, when people have insufficient income, are forced to seek welfare support, or lack the social support structure that comes from an intimate partnership, it can have profound consequences for their mental health and well-being – and for society as a whole,” he said. “Understanding this helps to build the case for early prevention programs that can reduce childhood externalizing and internalizing problems and improve long-term outcomes.”
Several mechanisms could explain the associations found across the childhood symptom profiles, the study authors wrote. For instance, children with early behavior problems may be more likely to engage in risky adolescent activities, such as substance use, delinquent peer affiliations, and academic underachievement, which affects their transition to adulthood and accumulation of social and economic capital throughout life. Those with comorbid symptoms likely experience a compounded effect.
Future studies should investigate how to intervene effectively to support children, particularly those with comorbid externalizing and internalizing symptoms, the study authors wrote.
“Currently, most published studies focus on children with either externalizing or internalizing problems (and these programs can be effective, especially for externalizing problems), but we know very little about how to improve long-term outcomes for children with comorbid symptoms,” Dr. Vergunst said. “Given the large costs of these problems for individuals and society, this is a critical area for further research.”
‘Solid evidence’
Commenting on the findings, Ian Colman, PhD, a professor of epidemiology and public health and director of the Applied Psychiatric Epidemiology Across the Life course (APEAL) lab at the University of Ottawa, said, “Research like this provides solid evidence that if we do not provide appropriate supports for children who are struggling with their mental health or related behaviors, then these children are more likely to face a life of social and economic exclusion.”
Dr. Colman, who wasn’t involved with this study, has researched long-term psychosocial outcomes among adolescents with depression, as well as those with externalizing behaviors. He and colleagues have found poorer outcomes among those who exhibit mild or severe difficulties during childhood.
“Studying the long-term outcomes associated with child and adolescent mental and behavioral disorders gives us an idea of how concerned we should be about their future,” he said.
Dr. Vergunst was funded by postdoctoral fellowships from the Canadian Institutes of Health Research and the Fonds de Recherche du Quebec Santé. Dr. Orri and Dr. Colman report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
FAST score appears accurate for diagnosis of fibrotic NASH
The FAST score had a pooled sensitivity of 89% at the defined rule-out cutoff of .35 or lower and a pooled specificity of 89% at the defined rule-in cutoff of .67 or higher, Federico Ravaioli, MD, PhD, a gastroenterologist at the University of Modena & Reggio Emilia in Italy, and colleagues wrote in Gut.
“These results could be used in clinical screening studies to efficiently identify patients at risk of progressive NASH, who should be referred for a conclusive liver biopsy, and who might benefit from treatment with emerging pharmacotherapies,” the authors wrote.
The research team analyzed 12 observational studies, published between February 2020 and April 2022, comprising 5,835 participants with biopsy-confirmed nonalcoholic fatty liver disease (NAFLD). They included articles that reported data for calculating the sensitivity and specificity of the FAST score for identifying adult patients with fibrotic NASH, based on a defined rule-out cutoff of .35 or lower and a rule-in cutoff of .67 or higher. Fibrotic NASH was defined as NASH plus a NAFLD activity score of 4 or greater and fibrosis stage 2 or higher.
The pooled prevalence of fibrotic NASH was 28%. The mean age of participants ranged from 40 to 60, and the proportion of men ranged from 23% to 91%. The mean body mass index ranged from 23 kg/m2 to 41 kg/m2, with a prevalence of obesity ranging from 23% to 100% and preexisting type 2 diabetes ranging from 18% to 60%. Nine studies included patients with biopsy-proven NAFLD from tertiary care liver centers, and three studies included patients from bariatric clinics or bariatric surgery centers with available liver biopsy data.
Fibrotic NASH was ruled out in 2,723 patients (45.5%) by a FAST score of .35 or lower and ruled in for 1,287 patients (21.5%) by a FAST score of .67 or higher. In addition, 1,979 patients (33%) had a FAST score in the so-called “grey” intermediate zone.
Overall, the FAST score pooled sensitivity was 89%, and the pooled specificity was 89%. By the rule-out cutoff of .35, the sensitivity was 89% and the specificity was 56%. By the rule-in cutoff of .67, the sensitivity was 46% and the specificity was 89%.
At an expected prevalence of fibrotic NASH of 30%, the negative predictive value of the .35 cutoff was 92%, and the positive predictive value of the .67 cutoff was 65%. Across the included studies, the negative predictive value ranged from 77% to 98%, and the positive predictive value ranged from 32% to 87%.
For the rule-in cutoff of .67, at a pretest probability of 10%, 20%, 26.3%, and 30%, the probability of fibrotic NASH given a positive FAST score rose to 32%, 52%, 60%, and 65%, respectively. For the rule-out cutoff of .35, at the same pretest probabilities, the probability of having fibrotic NASH despite a negative FAST score was 2%, 5%, 7%, and 8%, respectively.
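Both sets of figures follow from Bayes’ rule applied to the pooled sensitivity and specificity at each cutoff; a minimal sketch that approximately reproduces them (small differences reflect rounding of the pooled inputs):

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value from sensitivity, specificity, and prevalence."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

def npv(sens: float, spec: float, prev: float) -> float:
    """Negative predictive value from sensitivity, specificity, and prevalence."""
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    return true_neg / (true_neg + false_neg)

# Rule-out cutoff (.35): sensitivity 89%, specificity 56%
# Rule-in cutoff (.67): sensitivity 46%, specificity 89%
print(f"{npv(0.89, 0.56, 0.30):.0%}")          # 92% NPV at 30% prevalence
print(f"{ppv(0.46, 0.89, 0.30):.0%}")          # ~64-65% PPV at 30% prevalence
for prev in (0.10, 0.20, 0.263, 0.30):
    print(f"{ppv(0.46, 0.89, prev):.0%}")      # ~32%, 51-52%, 60%, 64-65%
    print(f"{1 - npv(0.89, 0.56, prev):.0%}")  # ~2%, 5%, 7%, 8%
```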
In subgroup analyses, the sensitivity of the rule-out cutoff was significantly affected by the study design. In addition, age and BMI above the median both affected pooled sensitivity but not pooled specificity. On the other hand, the rule-in cutoff was significantly affected by study design, BMI above the median, and presence of preexisting type 2 diabetes above the median.
“Today, we stand on the cusp of a revolutionary time to treat NASH. This is due in part to the fact that many exciting, novel precision metabolic treatments are in the pipeline to combat this disease,” said Brian DeBosch, MD, PhD, associate professor of cell biology and physiology at Washington University in St. Louis, who was not involved with this study.
“A major barrier in clinical NASH management is a rapid, noninvasive, and precise means by which to clinically stage such patients,” Dr. DeBosch said. “We now approach as closely as ever the sensitivity and specificity required to stratify the highest-risk patients, identify candidates for advanced therapy, and meaningfully reduce biopsies through using noninvasive testing.”
Dr. DeBosch noted the importance of pretest probability and specific subpopulations when deciding whether to use the FAST score. For instance, he said, a tertiary academic liver transplant center will see a different patient population than a primary care practice. Also, in this study, the presence or absence of diabetes and a BMI above 30 significantly altered sensitivity and specificity.
“One important remaining question stemming from these data is whether FAST can also be used as a surrogate measure to follow disease regression over time following intervention,” Dr. DeBosch said. “Even if FAST is not useful in that way, defining individuals who most need to undergo biopsy and/or those who need to undergo treatment remain important uses for this test.”
The study authors did not declare a specific funding source or report any competing interests. Dr. DeBosch reported no relevant disclosures.
FROM GUT
Updated celiac disease guideline addresses common clinical questions
The American College of Gastroenterology issued updated guidelines for celiac disease diagnosis, management, and screening that incorporate research conducted since the last update in 2013.
The guidelines offer evidence-based recommendations for common clinical questions on topics that include nonbiopsy diagnosis, gluten-free oats, probiotic use, and gluten-detection devices. They also point to areas for ongoing research.
“The main message of the guideline is all about quality of care,” Alberto Rubio-Tapia, MD, a gastroenterologist at the Cleveland Clinic, said in an interview.
“A precise celiac disease diagnosis is just the beginning of the role of the gastroenterologist,” he said. “But most importantly, we need to take care of our patients’ needs with good goal-directed follow-up using a multidisciplinary approach, with experienced dietitians playing an important role.”
The update was published in the American Journal of Gastroenterology.
Diagnosis recommendations
The ACG assembled a team of celiac disease experts and expert guideline methodologists to develop an update with high-quality evidence, Dr. Rubio-Tapia said. The authors made recommendations and suggestions for future research regarding eight questions concerning diagnosis, disease management, and screening.
For diagnosis, the guidelines recommend esophagogastroduodenoscopy (EGD) with multiple duodenal biopsies – one or two from the bulb and four from the distal duodenum – for confirmation in children and adults with suspected celiac disease. EGD and duodenal biopsies can also be useful for the differential diagnosis of other malabsorptive disorders or enteropathies, the authors wrote.
For children, a nonbiopsy option may be considered reliable for diagnosis. This option requires a combination of high-level tissue transglutaminase (TTG) IgA – at greater than 10 times the upper limit of normal – and a positive endomysial antibody finding in a second blood sample. The same criteria may also be considered for symptomatic adults who are unwilling or unable to undergo upper GI endoscopy.
For children younger than 2 years, TTG-IgA is the preferred test for those who are not IgA deficient. For children with IgA deficiency, testing should be performed using IgG-based antibodies.
Disease management guidance
After diagnosis, intestinal healing should be the endpoint for a gluten-free diet, the guidelines recommended. Clinicians and patients should discuss individualized goals of the gluten-free diet beyond clinical and serologic remission.
The standard of care for assessing patients’ diet adherence is an interview with a dietitian who has expertise in gluten-free diets, the recommendations stated. Subsequent visits should be encouraged as needed to reinforce adherence.
During disease management, upper endoscopy with intestinal biopsies can be helpful for monitoring cases in which there is a lack of clinical response or in which symptoms relapse despite a gluten-free diet, the authors noted.
In addition, after a shared decision-making conversation between the patient and provider, a follow-up biopsy could be considered for assessment of mucosal healing in adults who don’t have symptoms 2 years after starting a gluten-free diet, they wrote.
“Although most patients do well on a gluten-free diet, it’s a heavy burden of care and an important issue that impacts patients,” Joseph Murray, MD, a gastroenterologist at the Mayo Clinic in Rochester, Minn., said in an interview.
Dr. Murray, who wasn’t involved with this guideline update, contributed to the 2013 guidelines and the 2019 American Gastroenterological Association practice update on diagnosing and monitoring celiac disease. He agreed with many of the recommendations in this update.
“The goal of achieving healing is a good goal to reach. We do that routinely in my practice,” he said. “The older the patient, perhaps the more important it is to discuss, including the risk for complications. There’s a nuance involved with shared decision-making.”
Nutrition advice
The guidelines recommended against routine use of gluten-detection devices for food or biospecimens for patients with celiac disease. Although multiple devices have become commercially available in recent years, they are not regulated by the Food and Drug Administration and have sensitivity problems that can lead to false positive and false negative results, the authors noted. There’s also a lack of evidence that the devices enhance diet adherence or quality of life.
The evidence is insufficient to recommend for or against the use of probiotics for the treatment of celiac disease, the recommendations stated. Although dysbiosis is a feature of celiac disease, its role in disease pathogenesis and symptomatology is uncertain, the authors wrote.
Probiotics may help with functional disorders, such as irritable bowel syndrome, but because probiotics are marketed as supplements and regulations are lax, some products may contain detectable gluten despite being labeled gluten free, they added.
On the other hand, the authors recommended gluten-free oats as part of a gluten-free diet. Oat consumption appears to be safe for most patients with celiac disease, but it may be immunogenic in a subset of patients, depending on the products or quantity consumed. Given the small risk for an immune reaction to the oat protein avenin, monitoring for oat tolerance through symptoms and serology should be conducted, although the intervals for monitoring remain unknown.
Vaccination and screening
The guidelines also support vaccination against pneumococcal disease, since adults with celiac disease are at significantly increased risk of infection and complications. Vaccination is widely recommended for people aged 65 and older, for smokers aged 19-64, and for adults with underlying conditions that place them at higher risk, the authors noted.
Overall, the guidelines recommend case finding to increase detection of celiac disease in clinical practice but recommend against mass screening in the community. Patients with symptoms and laboratory evidence of malabsorption should be tested, as should those for whom celiac disease could be a treatable cause of symptoms, the authors wrote. Those with a first-degree family member who has a confirmed diagnosis should also be tested if they have possible symptoms, and asymptomatic relatives should consider testing as well.
The updated guidelines include changes that are important for patients and patient care, and they emphasize the need for continued research on key questions, Isabel Hujoel, MD, a gastroenterologist at the University of Washington Medical Center, Seattle, told this news organization.
“In particular, the discussion on the lack of evidence behind gluten-detection devices and probiotic use in celiac disease addresses conversations that come up frequently in clinic,” said Dr. Hujoel, who wasn’t involved with the update. “The guidelines also include a new addition below each recommendation where future research questions are raised. Many of these questions address gaps in our understanding on celiac disease, such as the possibility of a nonbiopsy diagnosis in adults, which will potentially dramatically impact patient care if addressed.”
The update received no funding. The authors, Dr. Murray, and Dr. Hujoel have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE AMERICAN JOURNAL OF GASTROENTEROLOGY
Vision screening at well-child visits cost-effective for detecting amblyopia
Screening for amblyopia during well-child visits in primary care is more cost-effective than school-based screening or optometric examinations for kindergarten-aged children in Toronto, data suggest.
Because of the low prevalence of amblyopia among young children, a population-based screening program may not warrant the resources required, despite the added health benefits of a universal program, according to the researchers.
“Amblyopia is a public health problem. For this reason, population-wide approaches to detect and treat amblyopia are critical, and approaches such as school screening and mandated optometry exams have been recommended and introduced in some jurisdictions,” study author Afua Oteng Asare, OD, PhD, a research assistant professor at the University of Utah in Salt Lake City, told this news organization. Dr. Asare conducted the study as a PhD student at the University of Toronto.
“With increasing budgeting constraints and limited resources, policymakers are relying more on economic analyses that measure value-for-money to inform their decisions on programming,” she said. “Evidence comparing the cost-effectiveness of vision-testing approaches to the status quo is, however, limited.”
The study was published in JAMA Network Open.
Analyzing costs
Despite recommendations for routine testing, a notable percentage of children in Canada and the United States don’t receive an annual vision exam. The percentage is even higher among children from low-income households, said Dr. Asare. Universal screening in schools and mandatory optometric examinations may improve vision care. But the cost-effectiveness of these measures is unknown for certain conditions, such as amblyopia, the prevalence of which ranges between 3% and 5% in young children.
In Ontario, Canada’s largest province with about 3 million children, universal funding for children’s annual comprehensive eye exams and vision screening during well-child visits is provided through provincial health insurance.
In 2018, the Ontario Ministry of Health introduced guidelines for administering vision screening in kindergartens by public health departments. However, school-based screening has been difficult to introduce because of increasing costs and budgeting constraints, the authors wrote. As an alternative to underfunded programs, optometric associations in Canada have advocated for physicians to recommend early childhood optometric exams.
The investigators analyzed the incremental costs and health benefits, from the perspective of the Ontario government, of public health school screening and optometrist-based vision exams, compared with standard vision screening conducted during well-child visits with primary care physicians. The analysis focused on detecting amblyopia and amblyopia-related risk factors in children aged 3-5 years in Toronto.
For the analysis, the research team simulated a hypothetical cohort of 25,000 children over 15 years in a probabilistic health state transition model. They incorporated various assumptions, including that children had irreversible vision impairment if not diagnosed by an optometrist. In addition, incremental costs were adjusted to favor the standard screening strategy during well-child visits.
In the school-based and primary care scenarios, children with a positive or inconclusive test result were referred to an optometrist for diagnosis and treatment, which would incur the cost of an optometric evaluation. If positive, children were treated with prescription glasses and additional patching for amblyopia.
The research team measured outcomes as incremental quality-adjusted life-years (QALYs), and health utilities were derived from data on adults, because of the lack of data on children under age 6 years with amblyopia or amblyopia risk factors. The researchers also estimated direct costs to the Ontario government, including visits with primary care doctors, optometrists, public health nurses, and contract screeners, as well as prescription glasses for children with vision impairment who receive social assistance. Costs were expressed in Canadian dollars (CAD).
Overall, compared with the primary care screening strategy, the school screening and optometric examination strategies were generally less costly and had more health benefits. The incremental difference in cost was a savings per child of $84.09 CAD for school screening and $74.47 CAD for optometric examinations. Optometric examinations yielded the largest gain in QALYs, compared with the primary care screening strategy, producing average QALYs of 0.0508 per child.
However, only 20% of school screening iterations and 29% of optometric exam iterations were cost-effective, relative to the primary care screening strategy, at a willingness-to-pay threshold of $50,000 CAD per QALY gained. For instance, when comparing optometric exams with primary care screenings, if the cost of vision screening was $11.50 CAD, the incremental cost-effectiveness ratio would be $77.95 CAD per QALY gained.
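For context, an incremental cost-effectiveness ratio (ICER) is simply the difference in cost between two strategies divided by the difference in QALYs, judged against the willingness-to-pay threshold. A minimal sketch, with made-up numbers rather than outputs from the study's model:

```python
# Minimal ICER illustration (hypothetical numbers, not the study's
# probabilistic health state transition model).

WTP_CAD_PER_QALY = 50_000  # willingness-to-pay threshold used in the study

def icer(delta_cost_cad, delta_qalys):
    """Incremental cost per QALY gained versus the comparator strategy."""
    return delta_cost_cad / delta_qalys

# Hypothetical strategy: costs $390 CAD more per child than the comparator
# and yields 0.05 extra QALYs per child.
ratio = icer(390.0, 0.05)
print(f"ICER = ${ratio:,.2f} CAD per QALY; cost-effective at threshold: "
      f"{ratio <= WTP_CAD_PER_QALY}")
# ICER = $7,800.00 CAD per QALY; cost-effective at threshold: True

# Note: a strategy that saves money while adding QALYs (negative delta_cost,
# positive delta_qalys) "dominates" the comparator, and an ICER is usually
# not reported for it.
```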
Results ‘make sense’
“We were initially surprised that the alternative screening programs were not cost-effective, compared to status quo vision screening in well-child visits,” said Dr. Asare. “However, the results make sense, considering the study’s universal approach (screening all children regardless of their vision status) and the study’s consideration only of amblyopia, and not of refractive errors, which are even more common in kindergarten children.”
Dr. Asare noted the lack of current data on the rate of vision screenings conducted in childhood by primary care practitioners and on referrals to eye care providers for children with abnormal screenings. Data on vision health disparities and barriers to accessing vision care in young children also are scarce.
“My ultimate research goal is to create and evaluate evidence-based, cost-effective interventions to be used at the point of care by pediatric primary care providers to improve the quality of vision care for children, especially those from socioeconomically deprived backgrounds,” she said. “The take-home message is that school vision screening and mandated eye exams are excellent programs, but they may not be suitable for all contexts.”
Additional studies are needed to look at the cost-effectiveness of the different screening strategies for other aspects included in childhood vision tests, including binocular vision problems, refractive disorders, myopia, allergies, and rare eye diseases.
Significant underestimation?
Susan Leat, PhD, a researcher and professor emerita at the University of Waterloo (Ont.) School of Optometry and Vision Science, said, “This study only considers amblyopia, and not all eye diseases and disorders, which significantly underestimates the cost-effectiveness of optometric eye exams.”
Dr. Leat, who wasn’t involved with this study, has researched pediatric optometry and visual development. She and colleagues are developing new tools to test visual acuity in young children.
“If all disorders were taken into account, then optometric testing would be by far the most cost-effective,” she said. “Optometrists can detect all disorders, including more subtle disorders, which if uncorrected or untreated, can impact a child’s early learning.”
The study authors reported no funding for the study. Dr. Asare and Dr. Leat reported no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
Which treatments improve long-term outcomes of critical COVID illness?
Treatment with interleukin-6 (IL-6) receptor antagonists and antiplatelet agents improved long-term survival of critically ill patients with COVID-19, according to new data.
However, survival wasn’t improved with therapeutic anticoagulation, convalescent plasma, or lopinavir-ritonavir, and survival was worsened with hydroxychloroquine.
“After critically ill patients leave the hospital, there’s a high risk of readmission, death after discharge, or exacerbations of chronic illness,” study author Patrick Lawler, MD, a clinician-scientist at the Peter Munk Cardiac Centre at University Health Network and an assistant professor of medicine at the University of Toronto, said in an interview.
“When looking at the impact of treatment, we don’t want to improve short-term outcomes yet worsen long-term disability,” he said. “That long-term, 6-month horizon is what matters most to patients.”
The study was published online in JAMA.
Investigating treatments
The investigators analyzed data from an ongoing platform trial called Randomized Embedded Multifactorial Adaptive Platform for Community Acquired Pneumonia (REMAP-CAP). The trial is evaluating treatments for patients with severe pneumonia in pandemic and nonpandemic settings.
In the trial, patients are randomly assigned to receive one or more interventions within the following six treatment domains: immune modulators, convalescent plasma, antiplatelet therapy, anticoagulation, antivirals, and corticosteroids. The trial’s primary outcome for patients with COVID-19 is hospital survival and organ support–free days up to 21 days. Researchers previously observed improvement after treatment with IL-6 receptor antagonists (which are immune modulators).
For this study, the research team analyzed data for 4,869 critically ill adult patients with COVID-19 who were enrolled between March 2020 and June 2021 at 197 sites in 14 countries. A 180-day follow-up was completed in March 2022. The critically ill patients had been admitted to an intensive care unit and had received respiratory or cardiovascular organ support.
The researchers examined survival through day 180. A hazard ratio of less than 1 represented improved survival, and an HR greater than 1 represented harm. Futility was represented by a relative improvement in outcome of less than 20%, which was shown by an HR greater than 0.83.
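The 0.83 bound follows from the 20% definition: on the hazard ratio scale, a 20% relative improvement corresponds to HR = 1/1.20, roughly 0.83, so any HR above that value falls short of the improvement target. A small sketch of the arithmetic (ours, not the trial's statistical code):

```python
# How the trial's futility bound arises: futility is declared when the
# relative improvement in outcome is under 20%, and on the hazard ratio
# scale a 20% relative improvement corresponds to HR = 1 / 1.20.
FUTILITY_HR_BOUND = 1 / 1.20
print(round(FUTILITY_HR_BOUND, 2))  # 0.83

def is_trial_futile(hazard_ratio):
    """HR < 1 means improved survival; an HR above the bound means the
    relative improvement is less than 20%, i.e., trial-defined futility."""
    return hazard_ratio > FUTILITY_HR_BOUND

print(is_trial_futile(0.90))  # True  (improvement under 20%)
print(is_trial_futile(0.75))  # False (improvement of at least 20%)
```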
Among the 4,869 patients, 4,107 patients had a known mortality status, and 2,590 were alive at day 180. Among the 1,517 patients who died by day 180, 91 deaths (6%) occurred between hospital discharge and day 180.
Overall, use of IL-6 receptor antagonists (either tocilizumab or sarilumab) had a greater than 99.9% probability of improving 6-month survival, and use of antiplatelet agents (aspirin or a P2Y12 inhibitor such as clopidogrel, prasugrel, or ticagrelor) had a 95% probability of improving 6-month survival, compared with control therapies.
In contrast, long-term survival wasn’t improved with therapeutic anticoagulation (11.5%), convalescent plasma (54.7%), or lopinavir-ritonavir (31.9%). The probability of trial-defined statistical futility was high for anticoagulation (99.9%), convalescent plasma (99.2%), and lopinavir-ritonavir (96.6%).
Long-term survival was worsened with hydroxychloroquine, with a posterior probability of harm of 96.9%. In addition, the combination of lopinavir-ritonavir and hydroxychloroquine had a 96.8% probability of harm.
Corticosteroids didn’t improve long-term outcomes, although enrollment in the treatment domain was terminated early in response to external evidence. The probability of improving 6-month survival ranged from 57.1% to 61.6% for various hydrocortisone dosing strategies.
Consistent treatment effects
When considered along with previously reported short-term results from the REMAP-CAP trial, the findings indicate that initial in-hospital treatment effects were consistent for most therapies through 6 months.
“We were very relieved to see that treatments with a favorable benefit for patients in the short term also appeared to be beneficial through 180 days,” said Dr. Lawler. “This supports the current clinical practice strategy in providing treatment to critically ill patients with COVID-19.”
In a subgroup analysis of 989 patients, health-related quality of life at day 180 was higher among those treated with IL-6 receptor antagonists and antiplatelet agents. The average quality-of-life score for the lopinavir-ritonavir group was lower than for control patients.
Among 720 survivors, 273 patients (37.9%) had moderate, severe, or complete disability at day 180. IL-6 receptor antagonists had a 92.6% probability of reducing disability, and anakinra (an IL-1 receptor antagonist) had a 90.8% probability of reducing disability. However, lopinavir-ritonavir had a 91.7% probability of worsening disability.
The REMAP-CAP trial investigators will continue to assess treatment domains and long-term outcomes among COVID-19 patients. They will evaluate additional data regarding disability, quality of life, and long-COVID outcomes.
“Reassuring” results
Commenting on the study, Angela Cheung, MD, PhD, a professor of medicine at the University of Toronto and senior scientist at the Toronto General Research Institute, said, “It is important to look at the longer-term effects of these therapies, as sometimes we may improve things in the short term, but that may not translate to longer-term gains. Historically, most trials conducted in this patient population assess only short outcomes, such as organ failure or 28-day mortality.”
Dr. Cheung, who wasn’t involved with this study, serves as the co-lead for the Canadian COVID-19 Prospective Cohort Study (CANCOV) and the Recovering From COVID-19 Lingering Symptoms Adaptive Integrative Medicine Trial (RECLAIM). These studies are also analyzing long-term outcomes among COVID-19 patients.
“It is reassuring to see that the 6-month outcomes are consistent with the short-term outcomes,” she said. “This study will help guide critical care medicine physicians in their treatment of critically ill patients with COVID-19.”
The study was supported by numerous grants and funds, including the Canadian Institute of Health Research COVID-19 Rapid Research Funding. Amgen and Eisai also provided funding. Dr. Lawler received grants from Canadian Institutes for Health Research and the Heart and Stroke Foundation of Canada during the conduct of the study and personal fees from Novartis, CorEvitas, Partners Healthcare, and the American College of Cardiology outside the submitted work. Dr. Cheung has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Treatments that improved survival among critically ill patients with COVID-19 in the short term, including IL-6 receptor antagonists and antiplatelet agents, also improved survival through 6 months, according to new data.
However, survival wasn’t improved with therapeutic anticoagulation, convalescent plasma, or lopinavir-ritonavir, and survival was worsened with hydroxychloroquine.
“After critically ill patients leave the hospital, there’s a high risk of readmission, death after discharge, or exacerbations of chronic illness,” study author Patrick Lawler, MD, a clinician-scientist at the Peter Munk Cardiac Centre at University Health Network and an assistant professor of medicine at the University of Toronto, said in an interview.
“When looking at the impact of treatment, we don’t want to improve short-term outcomes yet worsen long-term disability,” he said. “That long-term, 6-month horizon is what matters most to patients.”
The study was published online in JAMA.
Investigating treatments
The investigators analyzed data from an ongoing platform trial called Randomized Embedded Multifactorial Adaptive Platform for Community Acquired Pneumonia (REMAP-CAP). The trial is evaluating treatments for patients with severe pneumonia in pandemic and nonpandemic settings.
In the trial, patients are randomly assigned to receive one or more interventions within the following six treatment domains: immune modulators, convalescent plasma, antiplatelet therapy, anticoagulation, antivirals, and corticosteroids. The trial’s primary outcome for patients with COVID-19 is hospital survival and organ support–free days up to 21 days. Researchers previously observed improvement after treatment with IL-6 receptor antagonists (which are immune modulators).
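To make the "embedded multifactorial" design concrete, here is a minimal, hypothetical sketch of domain-based assignment. The domain names follow the article, but the intervention lists and the equal-allocation rule are illustrative assumptions; the trial's actual allocation is adaptive and considerably more sophisticated.

```python
import random

# Hypothetical sketch only: domain names follow the article; the intervention
# lists are illustrative placeholders, and equal allocation stands in for the
# trial's adaptive randomization.
DOMAINS = {
    "immune modulators": ["IL-6 receptor antagonist", "anakinra", "control"],
    "convalescent plasma": ["convalescent plasma", "control"],
    "antiplatelet therapy": ["aspirin", "P2Y12 inhibitor", "control"],
    "anticoagulation": ["therapeutic anticoagulation", "usual care"],
    "antivirals": ["lopinavir-ritonavir", "control"],
    "corticosteroids": ["hydrocortisone", "control"],
}

def assign_patient(eligible_domains, rng=random):
    """Assign one intervention in each domain the patient is eligible for."""
    return {domain: rng.choice(DOMAINS[domain]) for domain in eligible_domains}

# A patient eligible for three of the six domains:
print(assign_patient(["immune modulators", "antiplatelet therapy", "anticoagulation"]))
```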
For this study, the research team analyzed data for 4,869 critically ill adult patients with COVID-19 who were enrolled between March 2020 and June 2021 at 197 sites in 14 countries. A 180-day follow-up was completed in March 2022. The critically ill patients had been admitted to an intensive care unit and had received respiratory or cardiovascular organ support.
The researchers examined survival through day 180. A hazard ratio (HR) of less than 1 represented improved survival, and an HR greater than 1 represented harm. Futility was defined as a relative improvement in survival of less than 20%, corresponding to an HR greater than 0.83.
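On one consistent reading of this threshold, relative improvement is expressed as the reciprocal of the HR, which maps the 20% margin onto the 0.83 cutoff:

```latex
\text{relative improvement} = \frac{1}{\mathrm{HR}}, \qquad
\frac{1}{\mathrm{HR}} < 1.20 \;\Longleftrightarrow\; \mathrm{HR} > \frac{1}{1.20} \approx 0.83 .
```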
Among the 4,869 patients, 4,107 patients had a known mortality status, and 2,590 were alive at day 180. Among the 1,517 patients who died by day 180, 91 deaths (6%) occurred between hospital discharge and day 180.
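These counts are internally consistent:

```latex
4{,}107 - 2{,}590 = 1{,}517 \text{ deaths by day 180}, \qquad
\frac{91}{1{,}517} \approx 0.060 = 6\%.
```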
Overall, use of IL-6 receptor antagonists (either tocilizumab or sarilumab) had a greater than 99.9% probability of improving 6-month survival, and use of antiplatelet agents (aspirin or a P2Y12 inhibitor such as clopidogrel, prasugrel, or ticagrelor) had a 95% probability of improving 6-month survival, compared with control therapies.
In contrast, long-term survival wasn’t improved with therapeutic anticoagulation (probability of benefit, 11.5%), convalescent plasma (54.7%), or lopinavir-ritonavir (31.9%). The probability of trial-defined statistical futility was high for anticoagulation (99.9%), convalescent plasma (99.2%), and lopinavir-ritonavir (96.6%).
Long-term survival was worsened with hydroxychloroquine, with a posterior probability of harm of 96.9%. In addition, the combination of lopinavir-ritonavir and hydroxychloroquine had a 96.8% probability of harm.
Corticosteroids didn’t improve long-term outcomes, although enrollment in the treatment domain was terminated early in response to external evidence. The probability of improving 6-month survival ranged from 57.1% to 61.6% for various hydrocortisone dosing strategies.
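The benefit, futility, and harm figures above are Bayesian posterior probabilities. As a rough illustration of where such numbers come from, the sketch below assumes a normal posterior for the log HR and estimates each probability by Monte Carlo; the trial's actual models are far more elaborate, and the parameter values here are hypothetical.

```python
import math
import random

def posterior_probabilities(mean_log_hr, sd_log_hr, n_draws=100_000, seed=42):
    """Estimate P(benefit) = P(HR < 1), P(harm) = P(HR > 1), and a
    trial-style futility probability P(HR > 0.83) by sampling from an
    assumed normal posterior over the log hazard ratio."""
    rng = random.Random(seed)
    draws = [rng.gauss(mean_log_hr, sd_log_hr) for _ in range(n_draws)]
    p_benefit = sum(d < 0.0 for d in draws) / n_draws
    p_futility = sum(d > math.log(0.83) for d in draws) / n_draws
    return {"benefit": p_benefit, "harm": 1.0 - p_benefit, "futility": p_futility}

# Hypothetical posterior centered on a modest survival benefit:
print(posterior_probabilities(mean_log_hr=-0.20, sd_log_hr=0.10))
```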
Consistent treatment effects
When considered along with previously reported short-term results from the REMAP-CAP trial, the findings indicate that initial in-hospital treatment effects were consistent for most therapies through 6 months.
“We were very relieved to see that treatments with a favorable benefit for patients in the short term also appeared to be beneficial through 180 days,” said Dr. Lawler. “This supports the current clinical practice strategy in providing treatment to critically ill patients with COVID-19.”
In a subgroup analysis of 989 patients, health-related quality of life at day 180 was higher among those treated with IL-6 receptor antagonists and antiplatelet agents. The average quality-of-life score for the lopinavir-ritonavir group was lower than for control patients.
Among 720 survivors, 273 patients (37.9%) had moderate, severe, or complete disability at day 180. IL-6 receptor antagonists had a 92.6% probability of reducing disability, and anakinra (an IL-1 receptor antagonist) had a 90.8% probability of reducing disability. However, lopinavir-ritonavir had a 91.7% probability of worsening disability.
The REMAP-CAP trial investigators will continue to assess treatment domains and long-term outcomes among COVID-19 patients. They will evaluate additional data regarding disability, quality of life, and long-COVID outcomes.
“Reassuring” results
Commenting on the study, Angela Cheung, MD, PhD, a professor of medicine at the University of Toronto and senior scientist at the Toronto General Research Institute, said, “It is important to look at the longer-term effects of these therapies, as sometimes we may improve things in the short term, but that may not translate to longer-term gains. Historically, most trials conducted in this patient population assess only short outcomes, such as organ failure or 28-day mortality.”
Dr. Cheung, who wasn’t involved with this study, serves as the co-lead for the Canadian COVID-19 Prospective Cohort Study (CANCOV) and the Recovering From COVID-19 Lingering Symptoms Adaptive Integrative Medicine Trial (RECLAIM). These studies are also analyzing long-term outcomes among COVID-19 patients.
“It is reassuring to see that the 6-month outcomes are consistent with the short-term outcomes,” she said. “This study will help guide critical care medicine physicians in their treatment of critically ill patients with COVID-19.”
The study was supported by numerous grants and funds, including Canadian Institutes of Health Research COVID-19 Rapid Research Funding. Amgen and Eisai also provided funding. Dr. Lawler received grants from the Canadian Institutes of Health Research and the Heart and Stroke Foundation of Canada during the conduct of the study and personal fees from Novartis, CorEvitas, Partners Healthcare, and the American College of Cardiology outside the submitted work. Dr. Cheung has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA
Two drug classes appear effective for gastroparesis treatment
Two drug classes appear superior to placebo for the treatment of gastroparesis, according to a new report. Oral dopamine antagonists and tachykinin-1 antagonists both outperformed placebo for global symptoms, the study finds, and some individual drugs ranked higher for addressing specific symptoms.
“Gastroparesis has a substantial impact on quality of life and societal functioning for patients, and the costs to the health service are high,” Alexander Ford, MBChB, MD, a professor of gastroenterology and honorary consultant gastroenterologist at the Leeds (England) Institute of Medical Research at St. James’s, University of Leeds, said in an interview.
“There are very few licensed therapies, but some novel drugs are in the pipeline, some existing drugs that are licensed for other conditions could be repurposed if efficacious, and some older drugs that have safety concerns may be beneficial,” he said. “Given the impact on patients and their symptoms, they may be willing to accept these safety risks in return for symptom improvement.”
Only one drug, the dopamine antagonist metoclopramide, has Food and Drug Administration approval for the treatment of gastroparesis, noted Dr. Ford and colleagues. The lack of other recommended drugs or new medications has resulted in off-label use of drugs in other classes.
The study was published online in Gastroenterology.
Investigating treatments
To address the lack of evidence supporting the efficacy and safety of licensed and unlicensed drugs for the condition, the researchers conducted a systematic review and network meta-analysis of randomized controlled trials of drugs for gastroparesis dating from 1947 to September 2022. The trials involved more than a dozen drugs in several classes.
They determined drug efficacy on the basis of global symptoms of gastroparesis and individual symptoms such as nausea, vomiting, abdominal pain, bloating, or fullness. They judged safety on the basis of total adverse events and adverse events leading to withdrawal.
The research team extracted data as intention-to-treat analyses, assuming dropouts to be treatment failures. They reported efficacy as a pooled relative risk of symptoms not improving and ranked the drugs according to P-score, which summarizes the certainty that a treatment is better than its competitors, averaged across the network (values closer to 1 indicate a higher ranking).
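For a single trial, the intention-to-treat relative risk with dropouts counted as failures reduces to simple arithmetic; the sketch below is a generic illustration with invented numbers, not code from the review.

```python
def itt_relative_risk(no_improve_tx, dropouts_tx, n_tx,
                      no_improve_ctl, dropouts_ctl, n_ctl):
    """Relative risk of symptoms not improving on an intention-to-treat
    basis, counting dropouts as treatment failures. RR < 1 favors the drug."""
    risk_tx = (no_improve_tx + dropouts_tx) / n_tx
    risk_ctl = (no_improve_ctl + dropouts_ctl) / n_ctl
    return risk_tx / risk_ctl

# Hypothetical trial: 30/100 not improved plus 10 dropouts on the drug,
# 50/100 not improved plus 10 dropouts on placebo.
print(itt_relative_risk(30, 10, 100, 50, 10, 100))  # ~0.67, favoring the drug
```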
The analysis included 29 randomized controlled trials with 3,772 patients. Only four trials were at low risk of bias.
Overall, only two drug classes were considered efficacious: oral dopamine antagonists (RR, 0.58; P-score, 0.96) and tachykinin-1 antagonists (RR, 0.69; P-score, 0.83).
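As a hedged sketch of how a P-score can be computed, average, over all competitors, the one-sided probability that a treatment is better, using a normal approximation to the effect estimates. The inputs below are invented, and a real network meta-analysis accounts for correlations between comparisons that this simplification ignores.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_scores(estimates):
    """P-score per treatment: mean certainty of beating each competitor.
    `estimates` maps name -> (log RR of no improvement, standard error);
    lower values are better. Assumes independent estimates, a
    simplification a real network meta-analysis would not make."""
    scores = {}
    for name_i, (m_i, se_i) in estimates.items():
        probs = [
            normal_cdf((m_j - m_i) / math.sqrt(se_i**2 + se_j**2))
            for name_j, (m_j, se_j) in estimates.items()
            if name_j != name_i
        ]
        scores[name_i] = sum(probs) / len(probs)
    return scores

# Entirely hypothetical effect estimates versus a common comparator:
print(p_scores({"drug A": (-1.2, 0.4), "drug B": (-0.4, 0.3), "placebo": (0.0, 0.2)}))
```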
On the basis of 25 trials that reported on global symptoms, clebopride ranked first for efficacy (RR, 0.30; P-score, 0.99), followed by domperidone (RR, 0.69; P-score, 0.76). None of the other drugs were superior to placebo. After direct and indirect comparisons, clebopride was superior to all other drugs except aprepitant.
After excluding three trials with a placebo run-in and a trial where only responders to single-blind domperidone were randomized, the researchers analyzed 21 trials with 2,233 patients. In this analysis, domperidone ranked first (RR, 0.48; P-score, 0.93), followed by oral metoclopramide (RR, 0.54; P-score, 0.87). None of the other drugs were superior to placebo.
Among 16 trials, including 1,381 patients, that confirmed delayed gastric emptying among all participants, only clebopride and metoclopramide were more efficacious than placebo. Clebopride ranked first (RR, 0.30; P-score, 0.95) and metoclopramide ranked third (RR, 0.48).
Among 13 trials with 785 patients with diabetic gastroparesis, none of the active drugs were superior to placebo. Among 12 trials recruiting patients with idiopathic or mixed etiology gastroparesis, clebopride ranked first (RR, 0.30; P-score, 0.93).
On the basis of trials that assessed individual symptoms, oral metoclopramide ranked first for nausea (RR, 0.46; P-score, 0.95), fullness (RR, 0.67; P-score, 0.86), and bloating (RR, 0.53; P-score, 0.97). However, the data came from one small trial. Tradipitant (a tachykinin-1 antagonist) and TZP-102 (a ghrelin agonist) were efficacious for nausea, and TZP-102 ranked second for fullness. No drugs were more efficacious than placebo for abdominal pain or vomiting.
Among 20 trials that reported on the total number of adverse events, camicinal was the least likely to be associated with adverse events (RR, 0.77; P-score, 0.93) and prucalopride was the most likely to be associated with adverse events (RR, 2.96; P-score, 0.10). Prucalopride, oral metoclopramide, and aprepitant also were more likely than placebo to be associated with adverse events.
In 23 trials that reported on withdrawals caused by adverse events, camicinal was the least likely to be associated with withdrawals (RR, 0.20; P-score, 0.87). Nortriptyline was the most likely to be associated with withdrawals (RR, 3.33; P-score, 0.16). However, there were no significant differences between any individual drug and placebo.
Urgent need remains
More trials of drugs to treat gastroparesis are needed, Dr. Ford said.
“We need to consider the reintroduction of dopamine antagonists, if patients are willing to accept the safety concerns,” he added. “The other important point is most drugs were not of benefit. There is an urgent need to find efficacious therapies, and these should be fast-tracked for licensing approval if efficacy is proven.”
The study is “helpful for practicing clinicians since it provides a comprehensive review of clinical trials in gastroparesis,” Anthony Lembo, MD, a gastroenterologist at the Cleveland Clinic, said in an interview.
Dr. Lembo, who wasn’t involved with this study, has researched several drugs for gastroparesis, including relamorelin and TZP-102. He agreed that additional research is needed.
“There is a paucity of novel treatments currently in development,” he said. “However, there is interest in developing a product similar to domperidone without cardiac side effects, as well as performing larger studies with botulinum toxin injection.”
The authors did not disclose a funding source for the study. One author disclosed research funding from the National Institutes of Health and consulting roles with various pharmaceutical companies. Dr. Ford and the remaining authors reported no disclosures. Dr. Lembo reported no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM GASTROENTEROLOGY