Celebrating Excellence
As we settle back into our daily routines following another fantastic DDW, I’d like to take a moment to congratulate this year’s AGA Recognition Award recipients, who have made outstanding contributions to the organization and to our field, including through excellence in clinical practice, research, mentorship, and DEI.
This month’s Member Spotlight column highlights one of these remarkable individuals, Dr. Scott Ketover, president and CEO of MNGI Digestive Health, who is the recipient of this year’s AGA Distinguished Clinician Award in Private Practice. We hope you enjoy learning more about Scott, as well as the other award recipients who were recognized at a special ceremony in DC last month.
Also highlighted in our June issue is the FDA’s recent approval of subcutaneous vedolizumab as maintenance therapy for Crohn’s disease, an exciting development that will provide us with more flexible treatment options for our patients. We also report on the 2024 AGA Tech Summit (Chicago, IL) and introduce the winners (survivors?) of its annual Shark Tank competition, Dr. Renu Dhanasekaran and Dr. Venthan Elango. Their company, Arithmedics, which developed technology that harnesses generative AI and data intelligence to streamline medical billing, was identified as the most promising among a robust field of entrants.
We also present some of the best clinically oriented content from our GI journals, including an observational study from Gastroenterology evaluating the effect of longitudinal alcohol use on risk of cirrhosis among patients with steatotic liver disease, and summarize recently released AGA Clinical Practice Updates on performance of high-quality upper endoscopy and treatment of cannabinoid hyperemesis syndrome. We hope you enjoy all the exciting content featured in this issue and take some well-deserved time to rest and recharge this summer!
Megan A. Adams, MD, JD, MSc
Editor-in-Chief
Is Red Meat Healthy? Multiverse Analysis Has Lessons Beyond Meat
Observational studies on red meat consumption and lifespan are prime examples of attempts to find signal in a sea of noise.
Randomized controlled trials are the best way to sort cause from mere correlation. But these are not possible in most matters of food consumption. So, we look back and observe groups with different exposures.
My most frequent complaint about these nonrandom comparison studies has been the chance that the two groups differ in important ways, and it’s these differences — not the food in question — that account for the disparate outcomes.
But selection biases are only one issue. There is also the matter of analytic flexibility. Observational studies are born from large databases. Researchers have many choices in how to analyze all these data.
A few years ago, Brian Nosek, PhD, and colleagues elegantly showed that analytic choices can affect results. His Many Analysts, One Data Set study had little uptake in the medical community, perhaps because he studied a social science question.
Multiple Ways to Slice the Data
Recently, a group from McMaster University, led by Dena Zeraatkar, PhD, has confirmed the analytic choices problem, using the question of red meat consumption and mortality.
Their idea was simple: Because there are many plausible and defensible ways to analyze a dataset, we should not choose one method; rather, we should choose thousands, combine the results, and see where the truth lies.
You might wonder how there could be thousands of ways to analyze a dataset. I surely did.
The answer stems from the choices that researchers face: the selection of eligible participants, the choice of analytic model (logistic, Poisson, etc.), and the covariates to adjust for. When you combine the possible choices, the counts multiply, so the number of unique analyses grows exponentially, as the example below makes concrete.
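Here is a back-of-the-envelope illustration in Python. The counts of choices are hypothetical, made up purely to show the arithmetic; any real study would have its own counts.

```python
# Illustrative only: hypothetical counts of analytic choices.
n_models = 3          # e.g., Cox, Poisson, logistic
n_eligibility = 4     # different inclusion/exclusion rules
n_exposure_defs = 5   # ways to categorize red meat intake
n_covariates = 20     # each candidate covariate is either adjusted for or not

# Independent choices multiply; the 2**n covariate term dominates.
n_specifications = n_models * n_eligibility * n_exposure_defs * 2**n_covariates
print(f"{n_specifications:,} plausible analyses")  # 62,914,560
```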
Dr. Zeraatkar and colleagues are research methodologists, so, sadly, they are comfortable with the clunky name of this approach: specification curve analysis. Don’t be deterred. It means that they analyze the data in thousands of ways using computers. Each way is a specification. In the end, the specifications give rise to a curve of hazard ratios for red meat and mortality. Another name for this approach is multiverse analysis.
For their paper in the Journal of Clinical Epidemiology, aptly named “Grilling the Data,” they didn’t just conjure up the many analytic ways to study the red meat–mortality question. Instead, they used a published systematic review of 15 studies on unprocessed red meat and early mortality. The studies included in this review reported 70 unique ways to analyze the association.
Is Red Meat Good or Bad?
Their first finding was that this analysis yielded widely disparate effect estimates, from 0.63 (reduced risk for early death) to 2.31 (a higher risk). The median hazard ratio was 1.14 with an interquartile range (IQR) of 1.02-1.23. One might conclude from this that eating red meat is associated with a slightly higher risk for early mortality.
Their second step was to calculate how many ways (specifications) there were to analyze the data by totaling all possible combinations of choices in the 70 ways found in the systematic review.
They calculated a total of 10 quadrillion possible unique analyses. A quadrillion is 1 with 15 zeros. Computing power cannot yet handle that many analyses. So, they generated 20 random unique combinations of covariates, which narrowed the number of analyses to about 1400. About 200 of these were excluded due to implausibly wide confidence intervals.
Voilà. They now had about 1200 different ways to analyze a dataset; they chose an NHANES longitudinal cohort study from 2007-2014. They deemed each of the more than 1200 approaches plausible because they were derived from peer-reviewed papers written by experts in epidemiology.
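For readers curious about the mechanics, here is a minimal sketch of a specification curve analysis in Python. This is not the authors’ code: the lifelines library, the column names, and the candidate covariates are all assumptions made for illustration.

```python
# A minimal specification-curve sketch (not the authors' actual pipeline).
# Assumes a pandas DataFrame `df` with numeric columns for follow-up time,
# a death indicator, red meat intake, and the candidate covariates below.
import random

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

CANDIDATES = ["age", "sex", "bmi", "smoking", "alcohol", "income"]  # hypothetical

def run_specifications(df: pd.DataFrame, n_sets: int = 20, seed: int = 0) -> pd.DataFrame:
    rng = random.Random(seed)
    seen, rows = set(), []
    while len(seen) < n_sets:
        # Sample a unique random combination of covariates, as the authors did.
        subset = tuple(sorted(c for c in CANDIDATES if rng.random() < 0.5))
        if subset in seen:
            continue
        seen.add(subset)
        cph = CoxPHFitter()
        cph.fit(df[["time", "death", "red_meat", *subset]],
                duration_col="time", event_col="death")
        hr = float(np.exp(cph.params_["red_meat"]))
        lo, hi = np.exp(cph.confidence_intervals_.loc["red_meat"])
        # Mimic the authors' exclusion of implausibly wide confidence
        # intervals (the width threshold here is arbitrary).
        if hi / lo < 20:
            rows.append({"covariates": subset, "hr": hr, "lo": lo, "hi": hi})
    return pd.DataFrame(rows).sort_values("hr").reset_index(drop=True)
```

Sorting the hazard ratios from smallest to largest yields the specification curve; in the actual paper, each specification varied not just covariates but also model type, eligibility criteria, and exposure definitions.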
Specification Curve Analyses Results
Each analysis (or specification) yielded a hazard ratio for red meat exposure and death.
- The median HR was 0.94 (IQR, 0.83-1.05) for the effect of red meat on all-cause mortality — ie, not significant.
- The range of hazard ratios was large, running from 0.51 (a 49% reduced risk for early mortality) to 1.75 (a 75% increase in early mortality).
- Among all analyses, 36% yielded hazard ratios above 1.0 and 64% less than 1.0.
- As for statistical significance, defined as P ≤.05, only 4% (or 48 specifications) met this threshold. Zeraatkar reminded me that this is roughly what you’d expect if unprocessed red meat has no effect on longevity (see the quick calculation after this list).
- Of the 48 analyses deemed statistically significant, 40 indicated that red meat consumption reduced early death and eight indicated that eating red meat led to higher mortality.
- Nearly half the analyses yielded unexciting point estimates, with hazard ratios between 0.90 and 1.10.
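A quick way to check that intuition: with roughly 1200 analyses and a 5% false-positive rate, you would expect about 60 “significant” results under the null. The sketch below treats the specifications as independent tests, which they are not (they reuse the same data), so it is only a rough bound.

```python
# Rough null expectation for ~1200 tests at alpha = 0.05.
from scipy.stats import binom

n, alpha = 1200, 0.05
print(n * alpha)                # ~60 significant results expected by chance alone
print(binom.cdf(48, n, alpha))  # chance of 48 or fewer if red meat has no effect
```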
Paradigm Changing
As a user of evidence, I find this a potentially paradigm-changing study. Observational studies far outnumber randomized trials. For many medical questions, observational data are all we have.
Now think about every observational study published. The authors tell you — post hoc — which method they used to analyze the data. The key point is that it is one method.
Dr. Zeraatkar and colleagues have shown that there are thousands of plausible ways to analyze the data, and this can lead to very different findings. In the specific question of red meat and mortality, their many analyses yielded a null result.
Now imagine other cases where the researchers did many analyses of a dataset and chose to publish only the significant ones. Observational studies are rarely preregistered, so a reader cannot know how a result would vary depending on analytic choices. A specification curve analysis of a dataset provides a much broader picture. In the case of red meat, you see some significant results, but the vast majority hover around null.
What about the difficulty in analyzing a dataset 1000 different ways? Dr. Zeraatkar told me that it is harder than just choosing one method, but it’s not impossible.
The main barrier to adopting this multiverse approach to data, she noted, was not the extra work but the entrenched belief among researchers that there is a best way to analyze data.
I hope you read this paper and think about it every time you read an observational study that finds a positive or negative association between two things. Ask: What if the researchers were as careful as Dr. Zeraatkar and colleagues and did multiple different analyses? Would the finding hold up to a series of plausible analytic choices?
Nutritional epidemiology would benefit greatly from this approach. But so would any observational study of an exposure and outcome. I suspect that the number of “positive” associations would diminish. And that would not be a bad thing.
Dr. Mandrola, a clinical electrophysiologist at Baptist Medical Associates, Louisville, Kentucky, disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Myth of the Month: Is Contrast-Induced Acute Kidney Injury Real?
A 59-year-old man presents with abdominal pain. He has a history of small bowel obstruction and diverticulitis. His medical history includes chronic kidney disease (CKD; baseline creatinine, 1.8 mg/dL), hypertension, type 2 diabetes, and depression. He had a colectomy 6 years ago for colon cancer.
He takes the following medications: semaglutide (1 mg weekly), amlodipine (5 mg once daily), and escitalopram (10 mg once daily). On physical exam, his blood pressure is 130/80 mm Hg, his pulse is 90, and his temperature is 37.2° C. He has normal bowel sounds but guarding in the right lower quadrant.
His hemoglobin is 14 g/dL, his blood sodium is 136 mEq/L, his blood potassium is 4.0 mmol/L, his BUN is 26 mg/dL, and his creatinine is 1.9 mg/dL. His kidney, ureter, and bladder (KUB) x-ray is unremarkable.
What imaging would you recommend?
A) CT without contrast
B) CT with contrast
C) MRI
D) Abdominal ultrasound
This patient has several potential causes of abdominal pain that imaging may clarify. I think a contrast CT scan would be the most likely to provide helpful information. If one were ordered, though, the radiologist might hesitate to perform the scan with contrast because of the patient’s CKD.
Concern for contrast-induced kidney injury has limited diagnostic testing for many years. How strong is the evidence for contrast-induced kidney injury, and should we avoid testing that requires contrast in patients with CKD? McDonald and colleagues performed a meta-analysis with 13 studies meeting inclusion criteria, involving 25,950 patients.1 They found no increased risk of acute kidney injury (AKI) in patients who received contrast medium compared with those who did not receive contrast; relative risk of AKI for those receiving contrast was 0.79 (confidence interval: 0.62-1.02). Importantly, there was no difference in AKI in patients with diabetes or CKD.
Ehmann et al. looked at renal outcomes in patients who received IV contrast when they presented to an emergency department with AKI.2 They found that in patients with AKI, receiving contrast was not associated with persistent AKI at hospital discharge. Hinson and colleagues looked at patients arriving at the emergency department and needing imaging.3 They did a retrospective, cohort analysis of 17,934 patients who had CT with contrast, CT with no contrast, or no CT. Contrast administration was not associated with increased incidence of AKI (odds ratio, 0.96, CI: 0.85-1.08).
Aycock et al. did a meta-analysis of AKI after CT scanning, including 28 studies involving 107,335 patients.4 They found that compared with noncontrast CT, CT scanning with contrast was not associated with AKI (OR, 0.94, CI: 0.83-1.07). Elias and Aronson looked at the risk of AKI after contrast in patients receiving CT scans compared with those who received ventilation/perfusion scans to evaluate for pulmonary embolism.5 There were 44 AKI events (4.5%) in patients exposed to contrast media and 33 events (3.4%) in patients not exposed to contrast media (risk difference: 1.1%, 95% CI: -0.6% to 2.9%; OR, 1.39, CI: 0.86-2.26; P = .18).
Despite multiple studies showing no increased risk, there is still concern that contrast can cause AKI.6 Animal models have shown that iodinated contrast can have a deleterious effect on mitochondria and membrane function.6 A criticism of many of the studies I have shared is their retrospective nature and the lack of randomized controlled trials: these studies may be biased, because the highest-risk patients are the least likely to receive contrast. In a joint guideline from the American College of Radiology and the National Kidney Foundation, this statement was made: “The risk of acute kidney injury developing in patients with reduced kidney function following exposure to intravenous iodinated contrast media has been overstated.”7 Their recommendation was to give contrast if needed in patients with glomerular filtration rates (GFRs) greater than 30.
Myth: Contrast-induced renal injury is a concern.
Clinical impact: For CT scanning, it is OK to give contrast when needed. A conservative cutoff for contrast use would be a GFR less than 30.
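To connect the vignette to that cutoff: the patient’s GFR can be estimated from his creatinine with the 2021 CKD-EPI equation. The sketch below uses the published race-free constants as I understand them and is for illustration only, not for clinical use.

```python
# Sketch of the 2021 CKD-EPI creatinine equation (race-free version).
# Constants per the published equation; verify independently before clinical use.
def egfr_ckd_epi_2021(scr_mg_dl: float, age: int, female: bool) -> float:
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.2
            * 0.9938 ** age)
    return egfr * 1.012 if female else egfr

# The 59-year-old man in the vignette, creatinine 1.9 mg/dL:
print(round(egfr_ckd_epi_2021(1.9, 59, female=False)))  # ~40 mL/min/1.73 m^2
```

An estimated GFR around 40 sits above the guideline’s threshold of 30, which is consistent with giving contrast here if it is needed.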
Dr. Paauw is professor of medicine in the Division of General Internal Medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. Contact Dr. Paauw at [email protected].
References
1. McDonald JS et al. Radiology. 2013;267:119-128.
2. Ehmann MR et al. Intensive Care Med. 2023;49(2):205-215.
3. Hinson JS et al. Ann Emerg Med. 2017;69(5):577-586.
4. Aycock RD et al. Ann Emerg Med. 2018 Jan;71(1):44-53.
5. Elias A, Aronson D. Thromb Haemost. 2021 Jun;121(6):800-807.
6. Weisbord SD, du Cheryon D. Intensive Care Med. 2018;44(1):107-109.
7. Davenport MS et al. Radiology. 2020;294(3):660-668.
‘Green Whistle’ Provides Pain Relief — But Not in the US
This discussion was recorded on March 29, 2024. The transcript has been edited for clarity.
Robert D. Glatter, MD: Joining me today to discuss the use of methoxyflurane (Penthrox), an inhaled nonopioid analgesic for the relief of acute pain, is Dr. William Kenneth (Ken) Milne, an emergency physician at Strathroy Middlesex General Hospital in Ontario, Canada, and the founder of the well-known podcast The Skeptics’ Guide to Emergency Medicine (SGEM).
Also joining me is Dr. Sergey Motov, an emergency physician and research director at Maimonides Medical Center in Brooklyn, New York, and an expert in pain management. I want to welcome both of you and thank you for joining me.
RAMPED Trial: Evaluating the Efficacy of Methoxyflurane
Dr. Glatter: Ken, your recent post on Twitter [now X] regarding the utility of Penthrox in the RAMPED trial really caught my attention. While the trial was from 2021, it really is relevant regarding the prehospital management of pain in the practice of emergency medicine, and certainly in-hospital practice. I was hoping you could review the study design but also get into the rationale behind the use of this novel agent.
William Kenneth (Ken) Milne, MD, MSc: Sure. I’d be happy to kick this episode off with talking about a study that was published in 2020 in Academic Emergency Medicine. It was an Australian study by Brichko et al., and they were doing a randomized controlled trial looking at methoxyflurane vs standard care.
They selected out a population of adults, which they defined as 18-75 years of age. They were in the prehospital setting and they had a pain score of greater than 8. They gave the participants methoxyflurane, which is also called the “green whistle.” They had the subjects take that for their prehospital pain, and they compared that with whatever your standard analgesic in the prehospital setting would be.
Their primary outcome was how many patients had at least 50% reduction in their pain score within 30 minutes. They recruited about 120 people, and they found that there was no statistical difference in the primary outcome between methoxyflurane and standard care. Again, that primary outcome was a reduction in pain score by greater than 50% at 30 minutes, and there wasn’t a statistical difference between the two.
There are obviously limits to any study, and it was a convenience sample. This was an unmasked trial, so people knew if they were getting this green whistle, which is popular in Australia. People would be familiar with this device, and they didn’t compare it with a sham or placebo group.
Pharmacology of Penthrox: Its Role and Mechanism of Action
Dr. Glatter: The primary outcome wasn’t met, but certainly secondary outcomes were. There was, again, a relatively small number of patients in this trial. That said, there was significant pain relief. I think there are issues with the trial, as with any trial limitations.
Getting to the pharmacology of Penthrox, can you describe this inhaled anesthetic and how we use it, specifically its role at the subanesthetic doses?
Sergey M. Motov, MD: Methoxyflurane is embedded in the green whistle package, and that whole contraption is called Penthrox. It’s an inhaled volatile fluorinated hydrocarbon anesthetic that was predominantly used, I’d say 40, 50 years ago, for general anesthesia and slowly but surely fell out of favor due to the fact that, when used for prolonged duration or in supratherapeutic doses, there were cases of severe or even fatal nephrotoxicity and hepatotoxicity.
In the late ’70s and early ’80s, the other fluranes, which are slightly different as general anesthetics, came on board, and methoxyflurane started slowly falling out of favor. Because of this paucity of use, and then a slightly greater number of subsequent cases of nephrotoxicity and hepatotoxicity, the US Food and Drug Administration (FDA) made the decision to pull the drug off the market in 2005. The FDA successfully accomplished its mission and since then has pretty much banned the use of inhaled methoxyflurane in any shape, form, or color in the United States.
Going back to the green whistle, it has been used in Australia probably for about 50-60 years, and has been used in Europe for probably 10-20 years. Ken can attest that it has been used in Canada for at least a decade and the track record is phenomenal.
We are using subanesthetic, even subtherapeutic, doses that, based on the available literature, have no reported incidence of this fatal hepatotoxicity or nephrotoxicity. We’re talking about 10 million doses administered worldwide, except in the United States. There are 40-plus randomized clinical trials with over 30,000 patients enrolled that prove efficacy and safety.
That’s where we are right now, in a conundrum. We have a great deal of data all over the world, except in the United States, that push for the use of this noninvasive, patient-controlled nonopioid inhaled anesthetic. We just don’t have the access in North America, with the exception of Canada.
Regulatory Hurdles: Challenges in FDA Approval
Dr. Glatter: Absolutely. The FDA wants to be cautious, but if you look at the evidence base of data on this, it really indicates otherwise. Do you think that these roadblocks can be somehow overcome?
Dr. Milne: In the 2000s and 2010s, everybody was focused on opioids and all the dangers and potential adverse events. Opioids are great drugs like many other drugs; it depends on dose and duration. If used properly, it’s an excellent drug. Well, here’s another excellent drug if it’s used properly, and the adverse events are dependent on their dose and duration. Penthrox, or methoxyflurane, is a subtherapeutic, small dose and there have been no reported cases of addiction or abuse related to these inhalers.
Dr. Glatter: That argues for the point — and I’ll turn this over to you, Sergey — of how can this not, in my mind, be an issue that the FDA can overcome.
Dr. Motov: I agree with you. It’s very hard for me to speak on behalf of the FDA, to allude to their thinking processes, but we need to be up to speed with the evidence. The first thing is, why don’t you study the drug in the United States? I’m not asking you to lift the ban, which you put in 2005, but why don’t you honor what has been done over two decades and at least open the door a little bit and let us do what we do best? Why don’t you allow us to do the research in a controlled setting with a carefully, properly selected group of patients without underlying renal or hepatic insufficiency and see where we’re at?
Let’s compare it against placebo. If that’s not ethical, let’s compare it against active comparators — God knows we have 15-20 drugs we can use — and let’s see where we’re at. Ken has been nothing short of superb when it comes to evidence. Let us put the evidence together.
Dr. Milne: If there were concerns decades ago, those need to be addressed. Science is iterative, and as other information becomes available, the scientific method would say: let’s reexamine this, let’s reexamine our position, and do that with evidence. To do that, it has to have validity within the US system. Someone like you, a pain research guru, should be doing this research to ask, “Does it work or not? Does this nonapproval still stand today in 2024?”
Dr. Motov: Thank you for the shout-out, and I agree with you. All of us, those who are interested, on the frontiers of emergency care — as present clinicians — we should be doing this. There is nothing that will convince the FDA more than properly and rightly conducted research, time to reassess the evidence, and time to be less rigid. I understand that you placed a ban 20 years ago, but let’s go with the science. We cannot be behind it.
Exploring the Ecological Footprint of Methoxyflurane
Dr. Milne: There was an Austrian study in 2022 and a very interesting study out of the UK looking at life-cycle impact assessment on the environment. Obviously, we want to provide patients with a safe and effective product, compared with other available products that might not have as good a safety profile; but beyond patient care alone, this work looks at the impact on the environment.
Dr. Glatter: Ken, can you tell me about some of your recent research regarding the environmental effects related to use of Penthrox, but also its utility pharmacologically and its mechanism of action?
Dr. Milne: There was a really interesting study published this year by Martindale in the Emergency Medicine Journal. It took a different approach to this question about could we be using this drug, and why should we be using this drug? Sergey and I have already talked about the potential benefits and the potential harms. I mentioned opioids and some of the concerns about that. For this drug, if we’re using it in the prehospital setting in this little green whistle, the potential benefits look really good, and we haven’t seen any of the potential harms come through in the literature.
This was another line of evidence of why this might be a good drug, because of the environmental impact of this low-dose methoxyflurane. They compared it with nitrous oxide and said, “Well, what about the life-cycle impact on the environment of using this and the overall cradle-to-grave environmental impacts?”
Obviously, Sergey and I are interested in patient care, and we treat patients one at a time. But we have a larger responsibility to social determinants of health, like our environment. If you look at the overall cradle-to-grave environmental impact of this drug, it was better than for nitrous oxide when looking specifically at climate-change impact. That might be another reason, another line of argument, that could be put forward in the United States to say, “We want to have a healthy environment and a healthy option for patients.”
I’ll let Sergey speak to mechanisms of action and those types of things.
Dr. Motov: It’s a volatile hydrocarbon general anesthetic, so I’m just going to say that it causes a generalized, diffuse cortical depression, and there are no particular channels, receptors, or enzymes we need to worry much about. In short, it’s an inhaled gas used to put patients or people to sleep.
Over the past 30 or 40 years — and I’ll go back to the past decade — there have been numerous studies in different countries (outside of the United States, of course), and with the recent study that Ken just cited, there were comparisons for managing predominantly acute traumatic injuries in pediatric and adult populations presenting to EDs in various regions of the world that compared Penthrox, or the green whistle, with either placebo or active comparators, which included parenteral opioids, oral opioids, and NSAIDs.
The recent systematic review by Fabbri, out of Italy, showed that for ultra–short-term pain — we’re talking about 5, 10, or 15 minutes — inhaled methoxyflurane was found to be equal or even superior to standard of care, primarily parenteral opioids, and safety was off the hook. Interestingly, with respect to analgesia, they found that geriatric patients seemed to respond more, in terms of change in pain score, than younger adults — we’re talking about ages 18-64 vs 65 or older. Again, we need to make sure that we carefully select those elderly people without underlying renal or hepatic insufficiency.
To wrap this up, there is evidence clearly supporting its analgesic efficacy and safety, even in comparison to commonly used and traditionally accepted analgesic modalities that we use for managing acute pain.
US Military Use and Implications for Civilian Practice
Dr. Glatter: Do you think that methoxyflurane’s use in the military will help propel its use in clinical settings in the US, and possibly convince the FDA to look at this closer? The military is currently using it in deployed combat veterans in an ongoing fashion.
Dr. Motov: I’m excited that the Department of Defense in the United States has taken the lead, and they’re being very progressive. There are data that we’ve adapted to the civilian environment by use of intranasal opioids and intranasal ketamine with more doctors who came out of the military. In the military, it’s a kingdom within a kingdom. I don’t know their relationship with the FDA, but I support the military’s pharmacologic initiative by honoring and disseminating their research once it becomes available.
For us nonmilitary folks, we still need to work with the FDA. We need to convince the FDA to let us study the drug, and then we need to pile the evidence within the United States so that the FDA will start looking at this favorably. It wouldn’t hurt and it wouldn’t harm. Any piece of evidence will add to the existing body of literature that we need to allow this medication to be available to us.
Safety Considerations and Aerosolization Concerns
Dr. Glatter: Its safety in children is well established in Australia and throughout the world. I think it deserves a careful look, and the evidence that you’ve both presented argues for the use of this prehospital but also in hospital. I guess there was concern in the hospital with underventilation and healthcare workers being exposed to the fumes, and then getting headaches, dizziness, and so forth. I don’t know if that’s borne out, Ken, in any of your experience in Canada at all.
Dr. Milne: We currently don’t have it in our shop. It’s being used in British Columbia right now in the prehospital setting, and I’m not aware of anybody using it in their department. It’s used prehospital as far as I know.
Dr. Motov: I can attest to it, if I may, because I have familiarized myself with the device; I was actually able to hold it in my hands. I have not used it yet, but I had the prototype. The way it’s set up, there is an activated charcoal chamber that sits right on top of the device, which serves as the scavenger for exhaled air that contains particles of methoxyflurane. In theory, and in practical use, it significantly reduces occupational exposure, although the available data lack specifics.
Although most of the researchers did not measure the concentration of methoxyflurane in ambient air within the ED treatment rooms, I believe the additional data sources clearly state that it’s within or even below the detectable level that would cause any harm. Once again, we need to be mindful of contraindications. We need to make sure that pregnant women will not be exposed to it.
Dr. Milne: In 2024, we also need to be concerned about aerosolizing procedures and aerosolizing treatments, and just take that into account because we should be considering all the potential benefits and all the potential harms. Going through the COVID-19 pandemic, there was concern about transmission and whether or not it was droplet or aerosolized.
There was an observational study published in 2022 in Austria by Trimmel in BMC Emergency Medicine showing similar results. It seemed to work well and potential harms didn’t get picked up. They had to stop the study early because of COVID-19.
We need to always focus in on the potential benefits, the potential harms; where does the science land? Where do the data lie? Then we move forward from that and make informed decisions.
Final Thoughts
Dr. Glatter: Are there any key takeaways you’d like to share with our audience?
Dr. Milne: One of the takeaways from this whole conversation is that science is iterative and science changes. When new evidence becomes available, and we’ve seen it accumulate around the world, we as scientists, as researchers, as people committed to great patient care should revisit our positions on this. Since there is a prohibition against this medication, I think it’s time to reassess that stance and determine whether it is still accurate today.
Dr. Motov: I wholeheartedly agree with this. Thank you, Ken, for bringing this up. Good point.
Dr. Glatter: This has been a really informative discussion. I think our audience will certainly embrace this. Thank you very much for your time; it’s much appreciated.
Dr. Glatter is an assistant professor of emergency medicine at Zucker School of Medicine at Hofstra/Northwell in Hempstead, New York. He is a medical adviser for Medscape and hosts the Hot Topics in EM series. Dr. Milne is an emergency physician at Strathroy Middlesex General Hospital in Ontario, Canada, and the founder of the well-known podcast The Skeptics’ Guide to Emergency Medicine (SGEM). Dr. Motov is professor of emergency medicine and director of research in the Department of Emergency Medicine at Maimonides Medical Center in Brooklyn, New York. He is passionate about safe and effective pain management in the emergency department, and has numerous publications on the subject of opioid alternatives in pain management. Dr. Glatter, Dr. Milne, and Dr. Motov had no conflicts of interest to disclose.
A version of this article appeared on Medscape.com.
Intermittent Fasting + HIIT: Fitness Fad or Fix?
Let’s be honest: Although as physicians we have the power of the prescription pad, so much of health, in the end, comes down to lifestyle. Of course, taking a pill is often way easier than changing your longstanding habits. And what’s worse, doesn’t it always seem like the lifestyle stuff that is good for your health is unpleasant?
Two recent lifestyle interventions that I have tried and find really not enjoyable are time-restricted eating (also known as intermittent fasting) and high-intensity interval training, or HIIT. The former leaves me hangry for half the day; the latter is, well, it’s just really hard compared with my usual jog.
But given the rule of unpleasant lifestyle changes, I knew as soon as I saw this recent study what the result would be. What if we combined time-restricted eating with high-intensity interval training?
I’m referring to this study, appearing in PLOS ONE from Ranya Ameur and colleagues, which is a small trial that enrolled otherwise healthy women with a BMI > 30 and randomized them to one of three conditions.
First was time-restricted eating. Women in this group could eat whatever they wanted, but only from 8 a.m. to 4 p.m. daily.
Second: high-intensity functional training. This is a variant of high-intensity interval training which focuses a bit more on resistance exercise than on pure cardiovascular stuff but has the same vibe of doing brief bursts of intensive activity followed by a cool-down period.
Third: a combination of the two. Sounds rough to me.
The study was otherwise straightforward. At baseline, researchers collected data on body composition and dietary intake, and measured blood pressure, glucose, insulin, and lipid biomarkers. A 12-week intervention period followed, after which all of this stuff was measured again.
Now, you may have noticed that there is no control group in this study. We’ll come back to that — a few times.
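To make that limitation concrete, here is a minimal Python sketch of the within-group, change-from-baseline arithmetic behind the effects discussed below. The numbers are hypothetical, chosen only to echo the magnitudes reported in the paper; they are not data from the trial.

def pct_change(before, after):
    # Percent change from baseline; negative values are reductions.
    return 100.0 * (after - before) / before

# Hypothetical group means at baseline and week 12 (illustration only):
measures = {
    "body weight (kg)": (92.0, 82.8),  # roughly a 10% drop
    "LDL (mg/dL)": (130.0, 78.0),      # roughly a 40% drop
}
for name, (pre, post) in measures.items():
    print(f"{name}: {pct_change(pre, post):+.1f}% vs baseline")

# With no control arm, there is no untreated group to difference against,
# so effects of time, regression to the mean, and behavior change from
# simply being observed cannot be separated from the interventions.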
Let me walk you through some of the outcomes here.
First off, body composition metrics. All three groups lost weight — on average, around 10% of body weight which, for a 12-week intervention, is fairly impressive. BMI and waist circumference went down as well, and, interestingly, much of the weight loss here was in fat mass, not fat-free mass.
Most interventions that lead to weight loss — and I’m including some of the newer drugs here — lead to both fat and muscle loss. That might not be as bad as it sounds; the truth is that muscle mass increases as fat increases because of the simple fact that if you’re carrying more weight when you walk around, your leg muscles get bigger. But to preserve muscle mass in the face of fat loss is sort of a Goldilocks finding, and, based on these results, there’s a suggestion that the high-intensity functional training helps to do just that.
The dietary intake findings were really surprising to me. Across the board, caloric intake decreased. It’s no surprise that time-restricted eating reduces calorie intake. That has been shown many times before and is probably the main reason it induces weight loss — less time to eat means you eat less.
But why would high-intensity functional training lead to lower caloric intake? Most people, myself included, get hungry after they exercise. In fact, one of the reasons it’s hard to lose weight with exercise alone is that we end up eating more calories to make up for what we lost during the exercise. This calorie reduction could be a unique effect of this type of exercise, but honestly it could also be something called the Hawthorne effect. Women in the study kept a food diary to track their intake, and the act of doing that might actually make you eat less; snacking is a little more annoying if you know you have to write it down. This is a situation where I would kill for a control group.
The lipid findings are also pretty striking, with around a 40% reduction in LDL across the board, and evidence of synergistic effects of combined time-restricted eating and high-intensity training on total cholesterol and triglycerides. This is like a statin level of effect — pretty impressive. Again, my heart pines for a control group, though.
Same story with glucose and insulin measures: an impressive reduction in fasting glucose and good evidence that the combination of time-restricted eating and high-intensity functional training reduces insulin levels and HOMA-IR as well.
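For anyone who wants the arithmetic behind that last measure: HOMA-IR is a standard index of insulin resistance calculated from fasting glucose and fasting insulin. A minimal sketch in Python, using hypothetical values rather than anything from the trial:

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    # Conventional-unit form: (glucose [mg/dL] x insulin [uU/mL]) / 405,
    # equivalent to (glucose [mmol/L] x insulin [uU/mL]) / 22.5.
    return (glucose_mg_dl * insulin_uU_ml) / 405.0

# Hypothetical before/after values (illustration only):
print(round(homa_ir(100, 12), 2))  # 2.96 at baseline
print(round(homa_ir(88, 8), 2))    # 1.74 at week 12; lower means less insulin resistance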
Really the only thing that wasn’t very impressive was the change in blood pressure, with only modest decreases across the board.
Okay, so let’s take a breath after this high-intensity cerebral workout and put this all together. This was a small study, lacking a control group, but with large effect sizes in very relevant clinical areas. It confirms what we know about time-restricted eating — that it makes you eat fewer calories — and introduces the potential that vigorous exercise can not only magnify the benefits of time-restricted eating but maybe even mitigate some of its risks, like the risk for muscle loss. And of course, it comports with my central hypothesis, which is that the more unpleasant a lifestyle intervention is, the better it is for you. No pain, no gain, right?
Of course, I am being overly dogmatic. There are plenty of caveats. Wrestling bears is quite unpleasant and almost certainly bad for you. And there are even some pleasant things that are pretty good for you — like coffee and sex. And there are even people who find time-restricted eating and high-intensity training pleasurable. They are called masochists.
I’m joking. The truth is that once these changes become habits, they stop feeling like punishment. Or, at least, they become much less painful. The trick is getting over the hump of change. If only there were a pill for that.
Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships. This transcript has been edited for clarity.
A version of this article appeared on Medscape.com.
Remembering the Dead in Unity and Peace
Soldiers’ graves are the greatest preachers of peace.
Albert Schweitzer1
From the window of my room in the house where I grew up, I could see the American flag flying over Fort Sam Houston National Cemetery. I would ride my bicycle around the paths that divided the grassy sections of graves to the blocks where my father and grandfather were buried. I would stand before the gravesites in a state combining prayer, processing, and remembrance. Carved into my grandfather’s headstone were the 2 world wars he fought in and on my father’s, the 3 conflicts in which he served. I would walk up to their headstones and trace the emblems of belief: the engraved Star of David that marked my grandfather’s grave and the simple cross for my father.
My visits and writing about them may strike some readers as morbid. However, for me, the experience and memories are calming and peaceful, like the cemetery itself. There was something incredibly comforting about the uniformity of the headstones standing out for miles, mirroring the ranks of soldiers in the wars they commemorated. Yet, as with the men and women who fought each conflict, every grave told a succinct, Hemingway-like story of a military career etched in stone. I know now that discrimination in the military segregated even the burial of service members.2 Still, it appeared to my younger self that, at least compared with civilian cemeteries and their massive monuments to the wealthy and powerful, there was an egalitarian effect: my master sergeant grandfather’s plot was indistinguishable from that of my colonel father.
Memorial Day and military cemeteries have a shared history. While Veterans Day honors all who have worn the uniform, living and dead, Memorial Day, as its name suggests, remembers those who have died in a broadly conceived line of duty. The holiday’s original name, Decoration Day, was changed to reflect its more solemn character and the reverence of remembrance.3 The first widespread observance of Memorial Day commemorated those who perished in the Civil War, which remains the conflict with the highest number of casualties in American history. The first national commemoration occurred at Arlington National Cemetery, where 5000 volunteers decorated 20,000 Union and Confederate graves in an act of solidarity and reconciliation. The practice struck a chord in a country beleaguered by war and division.2
National cemeteries also emerged from the grief and gratitude that marked the Civil War. President Abraham Lincoln, who gave us the famous US Department of Veterans Affairs (VA) mission motto, also inaugurated national cemeteries. At the beginning of the Civil War, only Union soldiers who sacrificed their lives to end slavery were entitled to burial in them. Reflective of the rift that divided the country, it was later contended that such divisiveness should not continue unto death, and Confederate soldiers were granted the right to be buried beside those they had fought against, united in death and memory.4
Today, the country is more divided than ever: more than a few observers of American culture believe we are on the brink of another civil war, a premise taken up by the popular new film Civil War.5 While we take their warning seriously, there are still signs of unity among the people, like those that followed the war between the states. Recently, in the same national cemetery where I first contemplated these themes, justice, delayed too long, was not entirely denied. A ceremony was held to dedicate 17 headstones honoring the memories of Black World War I Army soldiers who were court-martialed and hanged in the wake of the Houston riots of 1917. As a sign of their dishonor, their headstones had listed only their names and dates—nothing of their military service. At the urging of their descendants, the US Army reopened the files and found the verdicts to have been racially motivated. It set aside the convictions, granted the soldiers honorable discharges for their service in life, and replaced their headstones with ones that enshrine that respect in death.6
Some reading this column may, like me, have had the profound privilege of participating in a burial at a national cemetery. We recall the stirring mix of pride and loss when the honor guard hands the perfectly folded flag to the bereaved family member and bids farewell to their comrade with a salute. Yet not all families have this privilege. One of the saddest experiences I recall was being in a leadership position at a VA facility, unable to help impoverished families who were denied VA burial benefits or payments to transport their deceased veteran closer to home. That sorrow often turned to thankful relief when a veterans service organization or other community group offered to pay the funerary expenses. Fortunately, like eligibility for VA health care, the criteria for burial benefits have steadily expanded to encompass spouses, adult children, and others who served.7
In a similar display of altruism this Memorial Day, veterans service organizations, Boy Scouts, and volunteers will place a flag on every grave to show that some memories are stronger than death. If you have never seen it, I encourage you to visit a VA or a national cemetery this holiday or, even better, volunteer to place flags. Either way, spend a few moments thankfully remembering that we can all engage in those uniquely American Memorial Day pastimes of barbecues and baseball games because so many served and died to protect our way of life. The epigraph at the beginning of this column is attributed to Albert Schweitzer, the physician-theologian of reverence for life. The news today is full of war and rumors of war.8 Let us all hope that the message is heard around the world so there is no need to build more national cemeteries to remember our veterans.
1. Cohen R. On Omaha Beach today, where’s the comradeship? The New York Times. June 5, 2004. Accessed April 26, 2024. https://www.nytimes.com/2004/06/05/world/on-omaha-beach-today-where-s-the-comradeship.html
2. Stillwell B. How ‘Decoration Day’ became Memorial Day. Military.com. Published May 12, 2020. Accessed April 26, 2024. https://www.military.com/holidays/memorial-day/how-decoration-day-became-memorial-day.html
3. The history of Memorial Day. PBS. Accessed April 26, 2024. https://www.pbs.org/national-memorial-day-concert/memorial-day/history/
4. US Department of Veterans Affairs, National Cemetery Administration. Facts: NCA history and development. Updated October 18, 2023. Accessed April 26, 2024. https://www.cem.va.gov/facts/NCA_History_and_Development_1.asp
5. Lerer L. How the movie ‘Civil War’ echoes real political anxieties. The New York Times. April 21, 2024. Accessed April 26, 2024. https://www.nytimes.com/2024/04/21/us/politics/civil-war-movie-politics.html
6. VA’s National Cemetery Administration dedicates new headstones to honor Black soldiers, correcting 1917 injustice. News release. US Department of Veterans Affairs. Published February 22, 2024. Accessed April 26, 2024. https://news.va.gov/press-room/va-headstones-black-soldiers-1917-injustice/
7. US Department of Veterans Affairs, National Cemetery Administration. Burial benefits. Updated September 27, 2023. Accessed April 26, 2024. https://www.cem.va.gov/burial_benefits/
8. Racker M. Why so many politicians are talking about World War III. Time. November 20, 2023. Accessed April 29, 2024. https://time.com/6336897/israel-war-gaza-world-war-iii/
Comment on “Skin Cancer Screening: The Paradox of Melanoma and Improved All-Cause Mortality”
To the Editor:
I was unsurprised and gratified by the information presented in the Viewpoint on skin cancer screening by Ngo1 (Cutis. 2024;113:94-96). In my 30 years as a community dermatologist, I have observed that patients who opt to have periodic full-body skin examinations usually are more health literate and more likely to have a primary care physician (PCP) who has encouraged them to do so (ie, a conscientious practitioner directing their preventive care). Compared to those who do not get screened, they also are more likely to have a strong will to live and less likely to have multiple stressors that preclude self-care (eg, they may be less likely to have a spouse for whom they are a caregiver).
Findings on a full-body skin examination may impact patients in many ways, not only by the detection of skin cancers. I have discovered the following:
- evidence of diabetes/insulin resistance in the form of acanthosis nigricans, tinea corporis, or erythrasma;
- evidence of rosacea associated with excessive alcohol intake;
- evidence of smoking-related issues such as psoriasis or hidradenitis suppurativa;
- cutaneous evidence of other systemic diseases (eg, autoimmune disease, cancer);
- elucidation of other chronic health problems (eg, psoriasis of the skin as a clue for undiagnosed psoriatic arthritis); and
- detection of parasites on the skin (eg, ticks) or signs of infection that may have notable ramifications (eg, interdigital maceration of a diabetic patient with tinea pedis).
I even saw a patient who had been sent for magnetic resonance imaging for back pain by her internist without any physical examination when she actually had an erosion over the sacrum from a rug burn!
When conducting full-body skin examinations, dermatologists should not underestimate these principles:
- The “magic” of using a relatively noninvasive and sensitive screening tool—comfort and stress reduction for the patient from a thorough visual, tactile, olfactory, and auditory examination.
- Human interaction—especially when the patient is seen annually or even more frequently over a period of years or decades, and especially when an excellent patient-physician rapport has been established.
- The impact of improving a patient’s appearance on their overall sense of well-being (eg, by controlling rosacea).
- The opportunity to introduce concepts (ie, educate patients) such as alcohol avoidance, smoking cessation, weight reduction, hygiene, diet, and exercise in a more tangential way than a PCP, as well as to consider with patients the idea that lifestyle modification may be an adjunct, if not a replacement, for prescription treatments.
- The stress reduction that ensues when a variety of self-identified health issues are addressed, for which the only treatment may be reassurance.
I would add to Dr. Ngo’s argument that stratifying patients into skin cancer risk categories may be a useful measure if the only goal of periodic dermatologic evaluation is skin cancer detection. One size rarely fits all when it comes to health recommendations.
In sum, I believe that periodic full-body skin examination is absolutely beneficial to patient care, and I am not at all surprised that all-cause mortality was lower in patients who have those examinations. Furthermore, when I offer my healthy, low-risk patients the option to return in 2 years rather than 1, the vast majority insist on 1 year. My mother used to say, “It’s better to be looked over than to be overlooked,” and I tell my patients that, too—but it seems they already know that instinctively.
1. Ngo BT. Skin cancer screening: the paradox of melanoma and improved all-cause mortality. Cutis. 2024;113:94-96. doi:10.12788/cutis.0948
Hereditary Amyloidosis: 5 Things to Know
Amyloidosis is a condition marked by the accumulation of insoluble beta-sheet fibrillar protein aggregates in tissues; it can be acquired or hereditary. Hereditary amyloidogenic transthyretin (hATTR) amyloidosis is an autosomal-dominant disease caused by pathogenic variants in the TTR gene. The TTR protein, which is essential for transporting thyroxine and retinol-binding protein, is primarily synthesized in the liver and becomes unstable as a result of the pathogenic mutations. Inherited pathogenic variants lead to the protein’s misfolding, aggregation, and deposition as amyloid fibrils in different organs, resulting in progressive multisystem dysfunction. hATTR amyloidosis is a heterogeneous disease characterized by a wide range of clinical manifestations affecting the peripheral (both somatic and autonomic) nervous system, heart, kidneys, and central nervous system (CNS); however, the heart and peripheral nerves appear to be the main targets of the TTR-related pathologic process. Without treatment, the prognosis is poor, with an average life expectancy of 7-11 years; in recent years, however, the development of new therapeutics has brought new hope to patients.
Here are five things to know about hereditary amyloidosis.
1. Diagnosis of hereditary amyloidosis requires a high level of suspicion.
The diagnosis of hATTR amyloidosis presents a significant challenge, particularly in nonendemic regions, where a lack of family history and the heterogeneity of clinical presentation can delay diagnosis by 4-5 years. A timely diagnosis requires clinicians to maintain a high index of suspicion, especially when evaluating patients with neuropathic symptoms. Early diagnosis is crucial so that patients can begin recently available disease-modifying therapies that slow the disease course; failure to recognize the disease remains the major barrier to improved patient outcomes.
Confirming the diagnosis involves detecting amyloid deposits in tissue biopsy specimens from various possible sites, including the skin, nerves, and myocardium. However, the diagnosis can be challenging owing to the uneven distribution of amyloid fibrils, sometimes requiring multiple biopsies or alternative diagnostic approaches, such as TTR gene sequencing, to confirm the presence of an amyloidogenic pathogenic variant. Biopsy is not required for hATTR amyloidosis if imaging, the clinical phenotype, and genetic testing are all consistent with the diagnosis.
Once diagnosed, the assessment of organ involvement is essential, using nerve conduction studies, cardiac investigations (eg, echocardiography, ECG, scintigraphy), ophthalmologic assessments, and complete renal function evaluations to fully understand the extent of disease impact.
2. Hereditary amyloidosis diseases are classified into two primary categories.
Hereditary amyloidosis represents a group of diseases caused by inherited gene mutations and is classified into two main types: ATTR (transthyretin-related) and non-TTR. Most cases of hereditary amyloidosis are associated with the TTR gene. Mutations in this gene lead to different forms of ATTR amyloidosis, categorized on the basis of the specific mutation involved; hATTR50M (genotype Val50Met) is the most prevalent form.
ATTR mutations result in a variety of health issues, manifesting in three primary forms:
- Neuropathic ATTR (genotype Val50Met): Early symptoms include sensorimotor polyneuropathy of the legs, carpal tunnel syndrome, autonomic dysfunction, constipation/diarrhea, and impotence; late symptoms include cardiomyopathy, vitreous opacities, glaucoma, nephropathy, and CNS symptoms.
- Cardiac ATTR (genotype Val142Ile): This type is characterized by cardiomegaly, conduction block, arrhythmia, anginal pain, congestive heart failure, and sudden death.
- Leptomeningeal ATTR (genotype Asp38Gly): This is characterized by transient focal neurologic episodes, intracerebral and/or subarachnoid hemorrhages, dementia, ataxia, and psychosis.
Non-TTR amyloidoses are rarer than the ATTR variants and involve mutations in different genes that also have significant health impacts. The affected proteins include apolipoprotein AI, fibrinogen A alpha, lysozyme, apolipoprotein AII, gelsolin, and cystatin C. Each type produces a range of symptoms and requires an individualized management approach.
3. Heightened disease awareness has increased the recognized prevalence of hereditary amyloidosis.
hATTR amyloidosis has historically been recognized as a rare disease, with significant clusters in Portugal, Brazil, Sweden, and Japan, alongside smaller foci in regions such as Cyprus and Majorca. The disease’s incidence across Europe, long variable, is now perceived to be on the rise, a trend attributed to heightened disease awareness among healthcare providers and the broader availability of genetic testing; its recognized impact now extends to at least 29 countries globally. The genetic landscape of hATTR amyloidosis is diverse, with over 140 mutations identified in the TTR gene. Among these, the Val50Met mutation is particularly notable for its association with large patient clusters in the endemic regions.
Morbidity and mortality associated with hATTR amyloidosis are significant, with an average lifespan of 7-11 years after diagnosis; however, survival can vary widely depending on the specific genetic variant and organ involvement. Early diagnosis can substantially improve outcomes, yet for many the prognosis remains poor, especially in cases dominated by cardiomyopathy. Genetics play a central role in the disease’s transmission, with autosomal-dominant inheritance patterns and high penetrance among carriers of pathogenic mutations. Research continues to uncover the broad spectrum of genetic variations contributing to hATTR amyloidosis, with ongoing studies poised to expand our understanding of its molecular underpinnings and potential treatment options.
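Because transmission is autosomal dominant, each child of a carrier has a 50% chance of inheriting the pathogenic variant, and penetrance determines how often inheritance translates into clinical disease. The minimal sketch below makes that arithmetic explicit; the 0.8 penetrance value is a hypothetical placeholder for illustration, not a figure reported in this article.

```python
# Illustrative arithmetic for autosomal-dominant transmission risk.
# Assumes one carrier parent; the penetrance value is hypothetical.

def child_disease_risk(penetrance: float) -> float:
    """P(child eventually develops disease) with one carrier parent."""
    p_inherit = 0.5  # autosomal dominant: one mutated allele suffices
    return p_inherit * penetrance

print(child_disease_risk(penetrance=0.8))  # 0.4 under these assumptions
```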
4. The effect on quality of life is significant both in patients living with hATTR amyloidosis and their caregivers.
hATTR amyloidosis imposes a multifaceted burden on patients and their caregivers as the disease progresses. Symptoms range from sensorimotor impairment and gastrointestinal or autonomic dysfunction to heart failure, leading to significant health-related quality-of-life deficits. The systemic nature of hATTR amyloidosis significantly affects patients’ lifestyles, daily activities, and general well-being, especially because it typically manifests in adulthood, a crucial time for occupational changes. As the disease progresses, maintaining employment and managing household chores become increasingly difficult; symptomatic patients often cannot work at all and, when they can, struggle with absenteeism and presenteeism.
hATTR amyloidosis leads to physical, mental, occupational, and social limitations for patients, and it also places a considerable strain on their families and caregivers, who report poor mental health, work impairment, and a high time commitment (mean, 45.9 h/wk) to providing care.
5. There have been significant advancements in therapeutic options for early-stage hATTR amyloidosis.
After diagnosis, prompt initiation of treatment is recommended to delay the progression of hATTR amyloidosis; a multidisciplinary approach is essential, incorporating anti-amyloid therapy to inhibit further production and/or deposition of amyloid aggregates. Treatment strategies also include symptomatic therapy and management of cardiac, renal, and ocular involvement. Although many therapies have been developed, especially for the early stages of hATTR amyloidosis, therapeutic benefits for patients with advanced disease remain limited.
Recent advancements in the treatment of hATTR amyloidosis have introduced RNA-targeted therapies, including patisiran, vutrisiran, and eplontersen, which have shown efficacy in reducing hepatic TTR synthesis and the aggregation of misfolded monomers into amyloid deposits. These therapies, ranging from small interfering RNA formulations to antisense oligonucleotides, offer benefits in managing both the cardiomyopathy and the neuropathy associated with hATTR amyloidosis and are administered by various routes, including intravenous infusion and subcutaneous injection. In addition, stabilization of TTR tetramers with drugs such as tafamidis and diflunisal has effectively prevented the formation of amyloidogenic monomers. Moreover, other investigational agents, including TTR stabilizers such as acoramidis and tolcapone, as well as novel compounds that inhibit amyloid formation and disrupt fibrils, are expanding the therapeutic landscape for hATTR amyloidosis, providing hope for improved management of this complex condition.
Dr. Gertz is a professor and consultant in the Department of Hematology, Mayo Clinic, Rochester, Minnesota. He has disclosed the following relevant financial relationships: received income in an amount equal to or greater than $250 from AstraZeneca, Ionis, and Alnylam.
A version of this article appeared on Medscape.com.
Sunscreen Safety: 2024 Updates
Sunscreen is a cornerstone of skin cancer prevention. The first commercial sunscreen was developed nearly 100 years ago,1 yet questions and concerns about the safety of these essential topical photoprotective agents continue to occupy our minds. This article serves as an update on some of the big sunscreen questions, as informed by the available evidence.
Are sunscreens safe?
The story of sunscreen regulation in the United States is long and dry. The major pain point is that sunscreens are regulated by the US Food and Drug Administration (FDA) as over-the-counter drugs rather than cosmetics (as in Europe).2 Regulatory hurdles created a situation wherein no new active sunscreen ingredient has been approved by the FDA since 1999, except ecamsule for use in one product line. There is hope that changes enacted under the CARES Act will streamline and expedite the sunscreen approval process in the future.3
Amid the ongoing regulatory slog, the FDA became interested in learning more about sunscreen safety. Specifically, they sought to determine the GRASE (generally regarded as safe and effective) status of the active ingredients in sunscreens. In 2019, only the inorganic (physical/mineral) UV filters zinc oxide and titanium dioxide were considered GRASE.4 Trolamine salicylate and para-aminobenzoic acid were not GRASE, but they currently are not used in sunscreens in the United States. For all the remaining organic (chemical) filters, additional safety data were required to establish GRASE status.4 In 2024, the situation remains largely unchanged. Industry is working with the FDA on testing requirements.5
Why the focus on safety? After all, sunscreens have been used widely for decades without any major safety signals; their only well-established adverse effects are contact dermatitis and staining of clothing.6 Although preclinical studies raised concerns that chemical sunscreens could be associated with endocrine, reproductive, and neurologic toxicities, to date there are no high-quality human studies demonstrating negative effects.7,8
However, exposure patterns have evolved. Daily application (and reapplication) of sunscreen is now widely recommended. Also, chemical UV filters are used in many nonsunscreen products such as cosmetics, shampoos, fragrances, and plastics. In the United States, exposure to chemical sunscreens is ubiquitous; according to data from the National Health and Nutrition Examination Survey 2003-2004, oxybenzone was detected in 97% of more than 2500 urine samples, implying systemic absorption but not harm.9
The FDA confirmed the implication of systemic absorption via 2 maximal usage trials published in 2019 and 2020.10,11 In both studies, several chemical sunscreens were applied at the recommended density of 2 mg/cm2 to 75% of the body surface area multiple times over 4 days. For all tested organic UV filters, blood levels exceeded the predetermined FDA cutoff (0.5 ng/mL), even after one application.10,11 What’s the takeaway? Simply that the FDA now requires additional safety data for chemical sunscreen filters5; the findings in no way imply any associated harm. Two potential mitigating factors are that no one applies sunscreen at 2 mg/cm2, and the FDA’s blood level cutoff was a general estimate not specific to sunscreens.4,12
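To put the maximal usage conditions in perspective, a little arithmetic helps. The following minimal Python sketch, assuming an average adult body surface area of about 1.7 m² and an illustrative real-world application density of 0.75 mg/cm² (both assumptions for illustration, not figures from the trials), estimates the sunscreen mass involved per application:

```python
# Back-of-the-envelope dose arithmetic for the FDA maximal usage trials.
# Assumptions (not from the trials themselves): average adult body surface
# area ~1.7 m2; typical real-world application density ~0.75 mg/cm2.

BODY_SURFACE_AREA_CM2 = 1.7 * 10_000   # 1.7 m2 expressed in cm2
COVERAGE_FRACTION = 0.75               # trials covered 75% of body surface area
TRIAL_DENSITY_MG_PER_CM2 = 2.0         # recommended application density
REAL_WORLD_DENSITY_MG_PER_CM2 = 0.75   # illustrative midpoint of reported usage

covered_area = BODY_SURFACE_AREA_CM2 * COVERAGE_FRACTION

trial_dose_g = covered_area * TRIAL_DENSITY_MG_PER_CM2 / 1000
real_world_dose_g = covered_area * REAL_WORLD_DENSITY_MG_PER_CM2 / 1000

print(f"Covered area: {covered_area:,.0f} cm2")                  # ~12,750 cm2
print(f"Per-application dose in trials: {trial_dose_g:.1f} g")   # ~25.5 g
print(f"Illustrative real-world dose: {real_world_dose_g:.1f} g")  # ~9.6 g
```

At roughly 25 g per application, repeated over 4 days, the trial regimen deliberately represents a ceiling of exposure, several-fold higher than what most people actually apply.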
Nevertheless, a good long-term safety record for sunscreens does not negate the need for enhanced safety data when there is clear evidence of systemic absorption. In the meantime, concerned patients should be counseled that the physical/mineral sunscreens containing zinc oxide and titanium dioxide are considered GRASE by the FDA; even in nanoparticle form, they generally have not been found to penetrate beneath the stratum corneum.7,13
Does sunscreen cause frontal fibrosing alopecia?
Dermatologists are confronting the conundrum of rising cases of frontal fibrosing alopecia (FFA). Several theories on the pathogenesis of this idiopathic scarring alopecia have been raised, one of which involves increased use of sunscreen. Proposed explanations for sunscreen’s role in FFA include a lichenoid reaction inducing hair follicle autoimmunity through an unclear mechanism; a T cell–mediated allergic reaction, which is unlikely according to contact dermatitis experts14; reactive oxygen species production by titanium nanoparticles, yet titanium has been detected in hair follicles of both patients with FFA and controls15; and endocrine disruption following systemic absorption, which has not been supported by any high-quality human studies.7
An association between facial sunscreen use and FFA has been reported in case-control studies16; however, they have been criticized due to methodologic issues and biases, and they provide no evidence of causality.17,18 The jury remains out on the controversial association between sunscreen and FFA, with a need for more convincing data.
Does sunscreen impact coral reef health?
Coral reefs—crucial sources of aquatic biodiversity—are under attack from several different directions including climate change and pollution. As much as 14,000 tons of sunscreen enter coral reefs each year, and chemical sunscreen filters are detectable in waterways throughout the world—even in the Arctic.19,20 Thus, sunscreen has come under scrutiny as a potential environmental threat, particularly with coral bleaching.
Bleaching is a process in which corals exposed to an environmental stressor expel their symbiotic photosynthetic algae and turn white; if conditions fail to improve, the corals are vulnerable to death. In a highly cited 2016 study, coral larvae exposed to oxybenzone in artificial laboratory conditions displayed concentration-dependent mortality and decreased chlorophyll fluorescence, which suggested bleaching.19 These findings influenced legislation in Hawaii and other localities banning sunscreens containing oxybenzone. Problematically, the study has been criticized for acutely exposing the most susceptible coral life-forms to unrealistic oxybenzone concentrations; more broadly, there is no standardized approach to coral toxicity testing.21
The bigger picture (and elephant in the room) is that the primary cause of coral bleaching is undoubtedly climate change/ocean warming.7 More recent studies suggest that oxybenzone probably adds insult to injury for corals already debilitated by ocean warming.22,23
It has been posited that a narrow focus on sunscreens detracts attention from the climate issue.24 Individuals can take a number of actions to reduce their carbon footprint in an effort to preserve our environment, specifically coral reefs.25 Concerned patients should be counseled to use sunscreens containing the physical/mineral UV filters zinc oxide and titanium dioxide, which are unlikely to contribute to coral bleaching as commercially formulated.7
Ongoing Questions
A lot of unknowns about sunscreen safety remain, and much hubbub has been made over studies that often are preliminary at best. At the time of this writing, absent a crystal ball, this author continues to wear chemical sunscreens; spends a lot more time worrying about their carbon footprint than what type of sunscreen to use at the beach; and believes the association of FFA with sunscreen is unlikely to be causal. Hopefully much-needed rigorous evidence will guide our future approach to sunscreen formulation and use.
- Ma Y, Yoo J. History of sunscreen: an updated view. J Cosmet Dermatol. 2021;20:1044-1049.
- Pantelic MN, Wong N, Kwa M, et al. Ultraviolet filters in the United States and European Union: a review of safety and implications for the future of US sunscreens. J Am Acad Dermatol. 2023;88:632-646.
- Mohammad TF, Lim HW. The important role of dermatologists in public education on sunscreens. JAMA Dermatol. 2021;157:509-511.
- Sunscreen drug products for over-the-counter human use: proposed rule. Fed Regist. 2019;84:6204-6275.
- Lim HW, Mohammad TF, Wang SQ. Food and Drug Administration’s proposed sunscreen final administrative order: how does it affect sunscreens in the United States? J Am Acad Dermatol. 2022;86:E83-E84.
- Ekstein SF, Hylwa S. Sunscreens: a review of UV filters and their allergic potential. Dermatitis. 2023;34:176-190.
- Adler BL, DeLeo VA. Sunscreen safety: a review of recent studies on humans and the environment. Curr Dermatol Rep. 2020;9:1-9.
- Suh S, Pham C, Smith J, et al. The banned sunscreen ingredients and their impact on human health: a systematic review. Int J Dermatol. 2020;59:1033-1042.
- Calafat AM, Wong LY, Ye X, et al. Concentrations of the sunscreen agent benzophenone-3 in residents of the United States: National Health and Nutrition Examination Survey 2003-2004. Environ Health Perspect. 2008;116:893-897.
- Matta MK, Florian J, Zusterzeel R, et al. Effect of sunscreen application on plasma concentration of sunscreen active ingredients: a randomized clinical trial. JAMA. 2020;323:256-267.
- Matta MK, Zusterzeel R, Pilli NR, et al. Effect of sunscreen application under maximal use conditions on plasma concentration of sunscreen active ingredients: a randomized clinical trial. JAMA. 2019;321:2082-2091.
- Petersen B, Wulf HC. Application of sunscreen—theory and reality. Photodermatol Photoimmunol Photomed. 2014;30:96-101.
- Mohammed YH, Holmes A, Haridass IN, et al. Support for the safe use of zinc oxide nanoparticle sunscreens: lack of skin penetration or cellular toxicity after repeated application in volunteers. J Invest Dermatol. 2019;139:308-315.
- Felmingham C, Yip L, Tam M, et al. Allergy to sunscreen and leave-on facial products is not a likely causative mechanism in frontal fibrosing alopecia: perspective from contact allergy experts. Br J Dermatol. 2020;182:481-482.
- Thompson CT, Chen ZQ, Kolivras A, et al. Identification of titanium dioxide on the hair shaft of patients with and without frontal fibrosing alopecia: a pilot study of 20 patients. Br J Dermatol. 2019;181:216-217.
- Maghfour J, Ceresnie M, Olson J, et al. The association between frontal fibrosing alopecia, sunscreen, and moisturizers: a systematic review and meta-analysis. J Am Acad Dermatol. 2022;87:395-396.
- Seegobin SD, Tziotzios C, Stefanato CM, et al. Frontal fibrosing alopecia: there is no statistically significant association with leave-on facial skin care products and sunscreens. Br J Dermatol. 2016;175:1407-1408.
- Ramos PM, Anzai A, Duque-Estrada B, et al. Regarding methodologic concerns in clinical studies on frontal fibrosing alopecia. J Am Acad Dermatol. 2021;84:E207-E208.
- Downs CA, Kramarsky-Winter E, Segal R, et al. Toxicopathological effects of the sunscreen UV filter, oxybenzone (benzophenone-3), on coral planulae and cultured primary cells and its environmental contamination in Hawaii and the US Virgin Islands. Arch Environ Contam Toxicol. 2016;70:265-288.
- National Academies of Sciences, Engineering, and Medicine. Review of Fate, Exposure, and Effects of Sunscreens in Aquatic Environments and Implications for Sunscreen Usage and Human Health. The National Academies Press; 2022.
- Mitchelmore CL, Burns EE, Conway A, et al. A critical review of organic ultraviolet filter exposure, hazard, and risk to corals. Environ Toxicol Chem. 2021;40:967-988.
- Vuckovic D, Tinoco AI, Ling L, et al. Conversion of oxybenzone sunscreen to phototoxic glucoside conjugates by sea anemones and corals. Science. 2022;376:644-648.
- Wijgerde T, van Ballegooijen M, Nijland R, et al. Adding insult to injury: effects of chronic oxybenzone exposure and elevated temperature on two reef-building corals. Sci Total Environ. 2020;733:139030.
- Sirois J. Examine all available evidence before making decisions on sunscreen ingredient bans. Sci Total Environ. 2019;674:211-212.
- United Nations. Actions for a healthy planet. Accessed April 15, 2024. https://www.un.org/en/actnow/ten-actions
Artificial Intelligence in GI and Hepatology
Dear colleagues,
Since our prior Perspectives piece on artificial intelligence (AI) in GI and Hepatology in 2022, the field has seen almost exponential growth. Expectations are high that AI will revolutionize our field and significantly improve patient care. But as the global discussion on AI has shown, there are real challenges with adoption, including issues with accuracy, reliability, and privacy.
In this issue, Dr. Nabil M. Mansour and Dr. Thomas R. McCarty explore the current and future impact of AI on gastroenterology, while Dr. Basile Njei and Yazan A. Al-Ajlouni assess its role in hepatology. We hope these pieces will help your discussions in incorporating or researching AI for use in your own practices. We welcome your thoughts on this issue on X @AGA_GIHN.
Gyanprakash A. Ketwaroo, MD, MSc, is associate professor of medicine, Yale University, New Haven, Conn., and chief of endoscopy at West Haven (Conn.) VA Medical Center. He is an associate editor for GI & Hepatology News.
Artificial Intelligence in Gastrointestinal Endoscopy
BY THOMAS R. MCCARTY, MD, MPH; NABIL M. MANSOUR, MD
The last few decades have seen an exponential increase in interest in the role of artificial intelligence (AI) and in the adoption of deep learning algorithms within healthcare and patient care services. The field of gastroenterology and endoscopy has similarly seen a tremendous uptake in acceptance and implementation of AI for a variety of gastrointestinal conditions. The spectrum of AI-based applications includes detection- and diagnostic-based tools as well as therapeutic assistance tools. From the first US Food and Drug Administration (FDA)-approved device that uses machine learning to assist clinicians in detecting lesions during colonoscopy, to other more innovative machine learning techniques for small bowel, esophageal, and hepatobiliary conditions, AI has dramatically changed the landscape of gastrointestinal endoscopy.
Approved applications for colorectal cancer
In an attempt to improve colorectal cancer screening and outcomes related to screening and surveillance, efforts have been focused on procedural performance metrics, quality indicators, and tools to aid in lesion detection and improve quality of care. One such tool has been computer-aided detection (CADe), with early randomized controlled trial (RCT) data showing significantly increased adenoma detection rate (ADR) and adenomas per colonoscopy (APC).1-3
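For readers less familiar with these quality metrics: ADR is the proportion of screening colonoscopies detecting at least one adenoma, and APC is the mean number of adenomas found per procedure. A minimal sketch with fabricated counts illustrates the computation:

```python
# Illustrative computation of adenoma detection rate (ADR) and
# adenomas per colonoscopy (APC). The per-procedure adenoma counts
# below are fabricated for demonstration only.

def adr_and_apc(adenomas_per_procedure: list[int]) -> tuple[float, float]:
    """Return (ADR, APC) for a list of per-colonoscopy adenoma counts."""
    n = len(adenomas_per_procedure)
    adr = sum(1 for count in adenomas_per_procedure if count >= 1) / n
    apc = sum(adenomas_per_procedure) / n
    return adr, apc

# Hypothetical series of 10 screening colonoscopies
counts = [0, 2, 1, 0, 0, 3, 1, 0, 1, 0]
adr, apc = adr_and_apc(counts)
print(f"ADR: {adr:.0%}")   # 50%
print(f"APC: {apc:.1f}")   # 0.8
```

Note that ADR is insensitive to additional adenomas found in the same patient, which is part of why APC has been proposed as a complementary metric.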
Ultimately, these data led to FDA approval of the CADe system GI Genius (Medtronic, Dublin, Ireland) in 2021.4 Additional systems have since been FDA approved or 510(k) cleared, including Endoscreener (Wision AI, Shanghai, China), SKOUT (Iterative Health, Cambridge, Massachusetts), MAGENTIQ-COLO (MAGENTIQ-EYE LTD, Haifa, Israel), and CAD EYE (Fujifilm, Tokyo), all of which have shown increased ADR, increased APC, and/or reduced adenoma miss rates in randomized trials.5
Yet despite the promise of improved quality and subsequent translation to better patient outcomes, there has been a noticeable disconnect between RCT data and more real-world literature.6 In a recent study, no improvement was seen in ADR after implementation of a CADe system for colorectal cancer screening, including among both higher- and lower-ADR performers. Looking at change over time after implementation, CADe had no positive effect in any group, divergent from early RCT data. In a more recent multicenter, community-based RCT, CADe again did not result in a statistically significant difference in the number of adenomas detected.7 The differences between some of these more recent “real-world” studies and the majority of data from RCTs raise important questions regarding the potential for bias (due to unblinding) in prospective trials, as well as the role of the human-AI interaction.
Importantly, in the RCT data, both cohorts in these studies met adequate ADR benchmarks, though it remains unclear whether an increased ADR necessarily translates into better patient outcomes: is higher always better? In addition, an important consideration in evaluating any AI/CADe system is that these systems often undergo frequent updates, each promising improved accuracy, sensitivity, and specificity. This is an interesting dilemma and raises questions about the enduring relevance of studies conducted using an outdated version of a CADe system.
Additional unanswered questions regarding an ideal ADR for implementation, preferred patient populations for screening (especially for younger individuals), and the role and adoption of computer-aided polyp diagnosis/characterization (CADx) within the United States remain. Furthermore, questions regarding procedural withdrawal time, impact on sessile serrated lesion detection, cost-effectiveness, and preferred adoption strategies have begun to be explored, though require more data to better define a best practice approach. Ultimately, answers to some of these unknowns may explain the discordant results and help guide future implementation measures.
Innovative applications for alternative gastrointestinal conditions
Given the fervor and excitement, as well as the outcomes associated with AI-based colorectal screening, it is not surprising these techniques have been expanded to other gastrointestinal conditions. At this time, all of these are fledgling, mostly single-center tools, not yet ready for widespread adoption. Nonetheless, these represent a potentially important step forward for difficult-to-manage gastrointestinal diseases.
Machine learning CADe systems have been developed to help identify early Barrett’s neoplasia, depth and invasion of gastric cancer, as well as lesion detection in small bowel video capsule endoscopy.8-10 Endoscopic retrograde cholangiopancreatography (ERCP)-based applications for cholangiocarcinoma and indeterminate stricture diagnosis have also been studied.11 Additional AI-based algorithms have been employed for complex procedures such as endoscopic submucosal dissection (ESD) or peroral endoscopic myotomy (POEM) to delineate vessels, better define tissue planes for dissection, and visualize landmark structures.12,13 Furthermore, AI-based scope guidance/manipulation, bleeding detection, landmark identification, and lesion detection have the potential to revolutionize endoscopic training and education. The impact that generative AI can potentially have on clinical practice is also an exciting prospect that warrants further investigation.
Artificial intelligence adoption in clinical practice
Clinical practice with regard to AI and colorectal cancer screening largely mirrors the disconnect in the current literature, with “believers” and “non-believers” as well as innovators and early adopters alongside laggards. In our own academic practices, we continue to struggle with the adoption and standardized implementation of AI-based colorectal cancer CADe systems, despite the RCT data showing positive results. It is likely that AI uptake will follow the technology predictions of Amara’s Law — i.e., individuals tend to overestimate the short-term impact of new technologies while underestimating long-term effects. In the end, more widespread adoption in community practice and larger scale real-world clinical outcomes studies are likely to determine the true impact of these exciting technologies. For other, less established AI-based tools, more data are currently required.
Conclusions
Ultimately, AI-based algorithms are likely here to stay, with continued improvement and evolution to occur based on provider feedback and patient care needs. Current tools, while not all-encompassing, have the potential to dramatically change the landscape of endoscopic training, diagnostic evaluation, and therapeutic care. It is critically important that relevant stakeholders, both endoscopists and patients, be involved in future applications and design to improve efficiency and quality outcomes overall.
Dr. McCarty is based in the Lynda K. and David M. Underwood Center for Digestive Disorders, Houston Methodist Hospital. Dr. Mansour is based in the section of gastroenterology, Baylor College of Medicine, Houston. Dr. McCarty reports no conflicts of interest. Dr. Mansour reports having been a consultant for Iterative Health.
References
1. Repici A, et al. Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial. Gastroenterology. 2020 Aug. doi: 10.1053/j.gastro.2020.04.062.
2. Repici A, et al. Artificial intelligence and colonoscopy experience: Lessons from two randomised trials. Gut. 2022 Apr. doi: 10.1136/gutjnl-2021-324471.
3. Wallace MB, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology. 2022 Jul. doi: 10.1053/j.gastro.2022.03.007.
4. United States Food and Drug Administration (FDA). GI Genius FDA Approval [April 9, 2021]. Accessed January 5, 2022. Available at: www.accessdata.fda.gov/cdrh_docs/pdf21/K211951.pdf.
5. Maas MHJ, et al. A computer-aided polyp detection system in screening and surveillance colonoscopy: An international, multicentre, randomised, tandem trial. Lancet Digit Health. 2024 Mar. doi: 10.1016/S2589-7500(23)00242-X.
6. Ladabaum U, et al. Computer-aided detection of polyps does not improve colonoscopist performance in a pragmatic implementation trial. Gastroenterology. 2023 Mar. doi: 10.1053/j.gastro.2022.12.004.
7. Wei MT, et al. Evaluation of computer-aided detection during colonoscopy in the community (AI-SEE): A multicenter randomized clinical trial. Am J Gastroenterol. 2023 Oct. doi: 10.14309/ajg.0000000000002239.
8. de Groof J, et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United European Gastroenterol J. 2019 May. doi: 10.1177/2050640619837443.
9. Kanesaka T, et al. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018 May. doi: 10.1016/j.gie.2017.11.029.
10. Sahafi A, et al. Edge artificial intelligence wireless video capsule endoscopy. Sci Rep. 2022 Aug. doi: 10.1038/s41598-022-17502-7.
11. Njei B, et al. Artificial intelligence in endoscopic imaging for detection of malignant biliary strictures and cholangiocarcinoma: A systematic review. Ann Gastroenterol. 2023 Mar-Apr. doi: 10.20524/aog.2023.0779.
12. Ebigbo A, et al. Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm. Gut. 2022 Dec. doi: 10.1136/gutjnl-2021-326470.
13. Cao J, et al. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun. 2023 Oct. doi: 10.1038/s41467-023-42451-8.
The Promise and Challenges of AI in Hepatology
BY BASILE NJEI, MD, MPH, PHD; YAZAN A. AL-AJLOUNI, MPHIL
In the dynamic realm of medicine, artificial intelligence (AI) emerges as a transformative force, notably within hepatology. The discipline of hepatology, dedicated to liver and related organ diseases, is ripe for AI’s promise to revolutionize diagnostics and treatment, pushing toward a future of precision medicine. Yet, the path to fully realizing AI’s potential in hepatology is laced with data, ethical, and integration challenges.
The application of AI, particularly in histopathology, significantly enhances disease diagnosis and staging in hepatology. AI-driven approaches remedy traditional histopathological challenges, such as interpretative variability, providing more consistent and accurate disease analyses. This is especially evident in conditions like metabolic dysfunction-associated steatohepatitis (MASH) and hepatocellular carcinoma (HCC), where AI aids in identifying critical gene signatures, thereby refining therapy selection.
Similarly, deep learning (DL), a branch of AI, has attracted significant interest globally, particularly in image recognition. AI’s incorporation into medical imaging marks a significant advancement, enabling early detection of malignancies like HCC and improving diagnostics in steatotic liver disease through enhanced imaging analyses using convolutional neural networks (CNN). The abundance of imaging data alongside clinical outcomes has catalyzed AI’s integration into radiology, leading to the swift growth of radiomics as a novel domain in medical research.
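To make the CNN concept concrete, the following toy sketch (an illustration only, not any published hepatology model; the grayscale input and two output classes are assumptions) shows the typical shape of such a network in PyTorch, with convolutional feature extraction followed by a small classifier head:

```python
# Minimal illustrative CNN for binary liver-image classification.
# Toy architecture for explanation only; requires PyTorch.
import torch
import torch.nn as nn

class TinyLiverCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two convolution/pooling stages extract image features
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale input
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Global pooling + linear layer map features to class scores
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = TinyLiverCNN()
dummy_scan = torch.randn(1, 1, 224, 224)  # batch of one 224x224 image
logits = model(dummy_scan)
print(logits.shape)  # torch.Size([1, 2])
```

Production models are far deeper and are trained on large, expert-labeled imaging datasets, but the basic pattern of learned convolutional features feeding a classifier is the same.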
AI has also been shown to identify nuanced alterations in electrocardiograms (EKGs) associated with liver conditions, potentially detecting the progression of liver diseases at an earlier stage than currently possible. By leveraging complex algorithms and machine learning, AI can analyze EKG patterns with a precision and depth unattainable through traditional manual interpretation. Given that liver diseases, such as cirrhosis or hepatitis, can induce subtle cardiac changes long before other clinical symptoms manifest, early detection through AI-enhanced EKG analysis could lead to timely interventions, potentially halting or reversing disease progression. This approach further enriches our understanding of the intricate interplay between liver function and cardiac health, highlighting the potential for AI to transform not just liver disease diagnostics but also to foster a more integrated approach to patient care.
Beyond diagnostics, the burgeoning field of generative AI introduces groundbreaking possibilities in treatment planning and patient education, particularly for chronic conditions like cirrhosis. Generative AI produces original content, including text, visuals, and music, by identifying and learning patterns from its training data. When it leverages large language models (LLMs), it entails training on vast collections of textual data and using AI models characterized by many parameters. A notable instance of generative AI employing LLMs is ChatGPT (Generative Pretrained Transformer). By simulating disease progression and treatment outcomes, generative AI can foster personalized treatment strategies and empower patients with knowledge about their health trajectories. Yet realizing this potential requires overcoming data quality and interpretability challenges and ensuring that AI outputs are accessible and actionable for clinicians and patients.
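As a simplified illustration of how such a patient-education tool might be structured, the sketch below drafts a constrained prompt for an LLM; `query_llm` is a hypothetical placeholder rather than any real API, and the prompt wording is purely illustrative:

```python
# Sketch of using a large language model for patient education material.
# `query_llm` is a hypothetical placeholder, not a real API; in practice it
# would wrap whichever LLM service an institution has vetted and deployed.

def query_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this to your institution's approved LLM.")

def cirrhosis_education_prompt(stage: str, reading_level: str = "8th grade") -> str:
    # Compose a constrained, clinician-reviewable prompt rather than
    # free-form questions, so outputs stay on-topic and auditable.
    return (
        f"Explain compensated vs decompensated cirrhosis to a patient with "
        f"{stage} disease at a {reading_level} reading level. "
        "List three warning symptoms that should prompt a call to the clinic. "
        "Do not give medication doses."
    )

prompt = cirrhosis_education_prompt(stage="early, compensated")
print(prompt)
# draft = query_llm(prompt)  # the draft would be clinician-reviewed before use
```

Constraining and logging prompts in this way is one approach to the interpretability and accountability concerns discussed below.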
Despite these advancements, leveraging AI in hepatology is not devoid of hurdles. The development and training of AI models require extensive and diverse datasets, raising concerns about data privacy and ethical use. Addressing these concerns is paramount for successfully integrating AI into clinical hepatology practice, necessitating transparent algorithmic processes and stringent ethical standards. Ethical considerations are central to AI’s integration into hepatology. Algorithmic biases, patient privacy, and the impact of AI-driven decisions underscore the need for cautious AI deployment. Developing transparent, understandable algorithms and establishing ethical guidelines for AI use are critical steps towards ethically leveraging AI in patient care.
In conclusion, AI’s integration into hepatology holds tremendous promise for advancing patient care through enhanced diagnostics, treatment planning, and patient education. Overcoming the associated challenges, including ethical concerns, data diversity, and algorithm interpretability, is crucial. As the hepatology community navigates this technological evolution, a balanced approach that marries technological advancements with ethical stewardship will be key to harnessing AI’s full potential, ensuring it serves the best interests of patients and propels the field of hepatology into the future.
We predict a trajectory of increased use and adoption of AI in hepatology. AI in hepatology is likely to meet the test of pervasiveness, improvement, and innovation. The adoption of AI in routine hepatology diagnosis and management will likely follow Amara’s law and the five stages of the hype cycle. We believe that we are still in the infant stages of adopting AI technology in hepatology, and this phase may last 5 years before there is a peak of inflated expectations. The trough of disillusionment and slopes of enlightenment may only be observed in the next decades.
Dr. Njei is based in the Section of Digestive Diseases, Yale School of Medicine, New Haven, Conn. Mr. Al-Ajlouni is a senior medical student at New York Medical College School of Medicine, Valhalla, N.Y. They have no conflicts of interest to declare.
Sources
Taylor-Weiner A, et al. A Machine Learning Approach Enables Quantitative Measurement of Liver Histology and Disease Monitoring in NASH. Hepatology. 2021 Jul. doi: 10.1002/hep.31750.
Zeng Q, et al. Artificial intelligence predicts immune and inflammatory gene signatures directly from hepatocellular carcinoma histology. J Hepatol. 2022 Jul. doi: 10.1016/j.jhep.2022.01.018.
Ahn JC, et al. Development of the AI-Cirrhosis-ECG Score: An Electrocardiogram-Based Deep Learning Model in Cirrhosis. Am J Gastroenterol. 2022 Mar. doi: 10.14309/ajg.0000000000001617.
Nduma BN, et al. The Application of Artificial Intelligence (AI)-Based Ultrasound for the Diagnosis of Fatty Liver Disease: A Systematic Review. Cureus. 2023 Dec 15. doi: 10.7759/cureus.50601.
3. Wallace MB, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology 2022 Jul. doi: 10.1053/j.gastro.2022.03.007.
4. United States Food and Drug Administration (FDA). GI Genius FDA Approval [April 9, 2021]. Accessed January 5, 2022. Available at: www.accessdata.fda.gov/cdrh_docs/pdf21/K211951.pdf.
5. Maas MHJ, et al. A computer-aided polyp detection system in screening and surveillance colonoscopy: An international, multicentre, randomised, tandem trial. Lancet Digit Health. 2024 Mar. doi: 10.1016/S2589-7500(23)00242-X.
6. Ladabaum U, et al. Computer-aided detection of polyps does not improve colonoscopist performance in a pragmatic implementation trial. Gastroenterology. 2023 Mar. doi: 10.1053/j.gastro.2022.12.004.
7. Wei MT, et al. Evaluation of computer-aided detection during colonoscopy in the community (AI-SEE): A multicenter randomized clinical trial. Am J Gastroenterol. 2023 Oct. doi: 10.14309/ajg.0000000000002239.
8. de Groof J, et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United European Gastroenterol J. 2019 May. doi: 10.1177/2050640619837443.
9. Kanesaka T, et al. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018 May. doi: 10.1016/j.gie.2017.11.029.
10. Sahafi A, et al. Edge artificial intelligence wireless video capsule endoscopy. Sci Rep. 2022 Aug. doi: 10.1038/s41598-022-17502-7.
11. Njei B, et al. Artificial intelligence in endoscopic imaging for detection of malignant biliary strictures and cholangiocarcinoma: A systematic review. Ann Gastroenterol. 2023 Mar-Apr. doi: 10.20524/aog.2023.0779.
12. Ebigbo A, et al. Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm. Gut. 2022 Dec. doi: 10.1136/gutjnl-2021-326470.
13. Cao J, et al. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun. 2023 Oct. doi: 10.1038/s41467-023-42451-8.
The Promise and Challenges of AI in Hepatology
BY BASILE NJEI, MD, MPH, PHD; YAZAN A. AL-AJLOUNI, MPHIL
In the dynamic realm of medicine, artificial intelligence (AI) emerges as a transformative force, notably within hepatology. The discipline of hepatology, dedicated to liver and related organ diseases, is ripe for AI’s promise to revolutionize diagnostics and treatment, pushing toward a future of precision medicine. Yet, the path to fully realizing AI’s potential in hepatology is laced with data, ethical, and integration challenges.
The application of AI, particularly in histopathology, significantly enhances disease diagnosis and staging in hepatology. AI-driven approaches remedy traditional histopathological challenges, such as interpretative variability, providing more consistent and accurate disease analyses. This is especially evident in conditions like metabolic dysfunction-associated steatohepatitis (MASH) and hepatocellular carcinoma (HCC), where AI aids in identifying critical gene signatures, thereby refining therapy selection.
Similarly, deep learning (DL), a branch of AI, has attracted significant interest globally, particularly in image recognition. AI’s incorporation into medical imaging marks a significant advancement, enabling early detection of malignancies like HCC and improving diagnostics in steatotic liver disease through enhanced imaging analyses using convolutional neural networks (CNN). The abundance of imaging data alongside clinical outcomes has catalyzed AI’s integration into radiology, leading to the swift growth of radiomics as a novel domain in medical research.
AI has also been shown to identify nuanced alterations in electrocardiograms (EKGs) associated with liver conditions, potentially detecting the progression of liver diseases at an earlier stage than currently possible. By leveraging complex algorithms and machine learning, AI can analyze EKG patterns with a precision and depth unattainable through traditional manual interpretation. Given that liver diseases, such as cirrhosis or hepatitis, can induce subtle cardiac changes long before other clinical symptoms manifest, early detection through AI-enhanced EKG analysis could lead to timely interventions, potentially halting or reversing disease progression. This approach further enriches our understanding of the intricate interplay between liver function and cardiac health, highlighting the potential for AI to transform not just liver disease diagnostics but also to foster a more integrated approach to patient care.
Beyond diagnostics, the burgeoning field of generative AI introduces groundbreaking possibilities in treatment planning and patient education, particularly for chronic conditions like cirrhosis. Generative AI produces original content, including text, visuals, and music, by identifying and learning patterns from its training data. When it leverages large language models (LLMs), it entails training on vast collections of textual data and using AI models characterized by many parameters. A notable instance of generative AI employing LLMs is ChatGPT (General Pretrained Transformers). By simulating disease progression and treatment outcomes, generative AI can foster personalized treatment strategies and empower patients with knowledge about their health trajectories. Yet, realizing these potential demands requires overcoming data quality and interpretability challenges, and ensuring AI outputs are accessible and actionable for clinicians and patients.
Despite these advancements, leveraging AI in hepatology is not devoid of hurdles. The development and training of AI models require extensive and diverse datasets, raising concerns about data privacy and ethical use. Addressing these concerns is paramount for successfully integrating AI into clinical hepatology practice, necessitating transparent algorithmic processes and stringent ethical standards. Ethical considerations are central to AI’s integration into hepatology. Algorithmic biases, patient privacy, and the impact of AI-driven decisions underscore the need for cautious AI deployment. Developing transparent, understandable algorithms and establishing ethical guidelines for AI use are critical steps towards ethically leveraging AI in patient care.
In conclusion, AI’s integration into hepatology holds tremendous promise for advancing patient care through enhanced diagnostics, treatment planning, and patient education. Overcoming the associated challenges, including ethical concerns, data diversity, and algorithm interpretability, is crucial. As the hepatology community navigates this technological evolution, a balanced approach that marries technological advancements with ethical stewardship will be key to harnessing AI’s full potential, ensuring it serves the best interests of patients and propels the field of hepatology into the future.
We predict a trajectory of increased use and adoption of AI in hepatology. AI in hepatology is likely to meet the test of pervasiveness, improvement, and innovation. The adoption of AI in routine hepatology diagnosis and management will likely follow Amara’s law and the five stages of the hype cycle. We believe that we are still in the infant stages of adopting AI technology in hepatology, and this phase may last 5 years before there is a peak of inflated expectations. The trough of disillusionment and slopes of enlightenment may only be observed in the next decades.
Dr. Njei is based in the Section of Digestive Diseases, Yale School of Medicine, New Haven, Conn. Mr. Al-Ajlouni is a senior medical student at New York Medical College School of Medicine, Valhalla, N.Y. They have no conflicts of interest to declare.
Sources
Taylor-Weiner A, et al. A Machine Learning Approach Enables Quantitative Measurement of Liver Histology and Disease Monitoring in NASH. Hepatology. 2021 Jul. doi: 10.1002/hep.31750.
Zeng Q, et al. Artificial intelligence predicts immune and inflammatory gene signatures directly from hepatocellular carcinoma histology. J Hepatol. 2022 Jul. doi: 10.1016/j.jhep.2022.01.018.
Ahn JC, et al. Development of the AI-Cirrhosis-ECG Score: An Electrocardiogram-Based Deep Learning Model in Cirrhosis. Am J Gastroenterol. 2022 Mar. doi: 10.14309/ajg.0000000000001617.
Nduma BN, et al. The Application of Artificial Intelligence (AI)-Based Ultrasound for the Diagnosis of Fatty Liver Disease: A Systematic Review. Cureus. 2023 Dec 15. doi: 10.7759/cureus.50601.
Dear colleagues,
Since our prior Perspectives piece on artificial intelligence (AI) in GI and Hepatology in 2022, the field has seen almost exponential growth. Expectations are high that AI will revolutionize our field and significantly improve patient care. But as the global discussion on AI has shown, there are real challenges with adoption, including issues with accuracy, reliability, and privacy.
In this issue, Dr. Nabil M. Mansour and Dr. Thomas R. McCarty explore the current and future impact of AI on gastroenterology, while Dr. Basile Njei and Yazan A. Al-Ajlouni assess its role in hepatology. We hope these pieces will inform your discussions as you incorporate or research AI for use in your own practices. We welcome your thoughts on this issue on X @AGA_GIHN.
Gyanprakash A. Ketwaroo, MD, MSc, is associate professor of medicine, Yale University, New Haven, Conn., and chief of endoscopy at West Haven (Conn.) VA Medical Center. He is an associate editor for GI & Hepatology News.
Artificial Intelligence in Gastrointestinal Endoscopy
BY THOMAS R. MCCARTY, MD, MPH; NABIL M. MANSOUR, MD
The last few decades have seen an exponential increase in interest in artificial intelligence (AI) and in the adoption of deep learning algorithms within healthcare and patient care services. The field of gastroenterology and endoscopy has likewise seen tremendous uptake in the acceptance and implementation of AI for a variety of gastrointestinal conditions. The spectrum of AI-based applications spans detection, diagnosis, and therapeutic assistance. From the first US Food and Drug Administration (FDA)-approved device that uses machine learning to assist clinicians in detecting lesions during colonoscopy, to more innovative machine learning techniques for small bowel, esophageal, and hepatobiliary conditions, AI has dramatically changed the landscape of gastrointestinal endoscopy.
Approved applications for colorectal cancer
In an effort to improve colorectal cancer screening and surveillance outcomes, work has focused on procedural performance metrics, quality indicators, and tools to aid lesion detection and improve quality of care. One such tool has been computer-aided detection (CADe), with early randomized controlled trial (RCT) data showing significantly increased adenoma detection rate (ADR) and adenomas per colonoscopy (APC).1-3
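For readers less familiar with these metrics, the underlying arithmetic is simple. Below is a minimal Python sketch of how ADR and APC might be computed from procedure-level records; the record fields and sample counts are hypothetical, for illustration only.

```python
# Minimal sketch of ADR and APC computation from hypothetical
# procedure-level records. Field names and data are illustrative.

from dataclasses import dataclass

@dataclass
class Colonoscopy:
    is_screening: bool    # ADR is conventionally defined on screening exams
    adenomas_found: int   # histologically confirmed adenomas for this exam

def adr(exams: list[Colonoscopy]) -> float:
    """Adenoma detection rate: share of screening exams with >= 1 adenoma."""
    screening = [e for e in exams if e.is_screening]
    return sum(e.adenomas_found >= 1 for e in screening) / len(screening)

def apc(exams: list[Colonoscopy]) -> float:
    """Adenomas per colonoscopy: mean adenoma count across exams."""
    return sum(e.adenomas_found for e in exams) / len(exams)

# Hypothetical example: five screening colonoscopies
exams = [Colonoscopy(True, 0), Colonoscopy(True, 2), Colonoscopy(True, 1),
         Colonoscopy(True, 0), Colonoscopy(True, 3)]
print(f"ADR = {adr(exams):.0%}, APC = {apc(exams):.1f}")  # ADR = 60%, APC = 1.2
```

Note that ADR is insensitive to finding more than one adenoma in a given exam, which is why APC is increasingly reported alongside it.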
Ultimately, these data led to FDA approval of the CADe system GI Genius (Medtronic, Dublin, Ireland) in 2021.4 Additional systems have since been FDA approved or 510(k) cleared, including Endoscreener (Wision AI, Shanghai, China), SKOUT (Iterative Health, Cambridge, Massachusetts), MAGENTIQ-COLO (MAGENTIQ-EYE LTD, Haifa, Israel), and CAD EYE (Fujifilm, Tokyo), all of which have shown increased ADR, increased APC, and/or reduced adenoma miss rates in randomized trials.5
Yet despite the promise of improved quality and subsequent translation to better patient outcomes, there has been a noticeable disconnect between RCT data and real-world literature.6 In a recent study, no improvement in ADR was seen after implementation of a CADe system for colorectal cancer screening, among both higher- and lower-ADR performers. Looking at change over time after implementation, CADe had no positive effect in any group, divergent from early RCT data. In a more recent multicenter, community-based RCT, CADe again did not produce a statistically significant difference in the number of adenomas detected.7 The differences between some of these “real-world” studies and the majority of RCT data raise important questions regarding the potential for bias (due to unblinding) in prospective trials, as well as the role of the human-AI interaction.
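As a concrete illustration of the statistics behind such comparisons, the sketch below runs a standard two-proportion z-test on ADR between a CADe arm and a control arm. The counts are invented for illustration and do not come from any of the cited trials.

```python
# Hedged sketch: comparing ADR between a CADe arm and a control arm
# with a two-proportion z-test. All counts are hypothetical.

from statsmodels.stats.proportion import proportions_ztest

detected = [231, 198]   # exams with >= 1 adenoma: [CADe arm, control arm]
n_exams  = [500, 500]   # total exams per arm

stat, p_value = proportions_ztest(detected, n_exams)
print(f"CADe ADR = {detected[0]/n_exams[0]:.1%}, "
      f"control ADR = {detected[1]/n_exams[1]:.1%}, p = {p_value:.3f}")
```

The same machinery underlies why adequately powered, unblinded trials can show significance that pragmatic implementation studies fail to reproduce: the effect size, not just the test, has to survive contact with routine practice.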
Importantly, in the RCT data, both cohorts in these studies met adequate ADR benchmarks, though it remains unclear whether a higher ADR necessarily translates into better patient outcomes: is higher always better? In addition, an important consideration when evaluating any AI/CADe system is that these systems often undergo frequent updates, each promising improved accuracy, sensitivity, and specificity. This raises questions about the enduring relevance of studies conducted using an outdated version of a CADe system.
Unanswered questions remain regarding an ideal ADR threshold for implementation, preferred patient populations for screening (especially younger individuals), and the role and adoption of computer-aided polyp diagnosis/characterization (CADx) within the United States. Furthermore, questions regarding procedural withdrawal time, impact on sessile serrated lesion detection, cost-effectiveness, and preferred adoption strategies have begun to be explored, though more data are required to define a best-practice approach. Ultimately, answers to some of these unknowns may explain the discordant results and help guide future implementation measures.
Innovative applications for alternative gastrointestinal conditions
Given the fervor and excitement, as well as the outcomes associated with AI-based colorectal screening, it is not surprising that these techniques have been extended to other gastrointestinal conditions. At this time, all of these are fledgling, mostly single-center tools, not yet ready for widespread adoption. Nonetheless, they represent a potentially important step forward for difficult-to-manage gastrointestinal diseases.
Machine learning CADe systems have been developed to help identify early Barrett’s neoplasia, assess the depth of invasion of gastric cancer, and detect lesions on small bowel video capsule endoscopy.8-10 Endoscopic retrograde cholangiopancreatography (ERCP)-based applications for cholangiocarcinoma and indeterminate stricture diagnosis have also been studied.11 Additional AI-based algorithms have been employed in complex procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) to delineate vessels, better define tissue planes for dissection, and visualize landmark structures.12,13 Furthermore, AI-based scope guidance/manipulation, bleeding detection, landmark identification, and lesion detection have the potential to revolutionize endoscopic training and education. The impact that generative AI could have on clinical practice is also an exciting prospect that warrants further investigation.
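Commercial CADe systems are proprietary and do not expose public APIs, but the underlying frame-by-frame inference pattern they share is straightforward. The hedged Python sketch below illustrates that pattern, using an off-the-shelf torchvision detector as a stand-in for a proprietary, polyp-trained model; the model choice and confidence threshold are placeholders, not any vendor’s implementation.

```python
# Generic frame-by-frame CADe inference pattern. The detector below is a
# stock torchvision model standing in for a polyp-trained network; in a
# real system, frames would come from the live endoscopy video feed.

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # placeholder detector

@torch.no_grad()
def flag_frame(frame: torch.Tensor, threshold: float = 0.8) -> list:
    """Return bounding boxes scoring above threshold for one RGB frame.

    `frame` is a (3, H, W) float tensor scaled to [0, 1].
    """
    output = model([frame])[0]                 # dict with boxes, labels, scores
    keep = output["scores"] > threshold        # placeholder operating point
    return output["boxes"][keep].tolist()

# Hypothetical usage on a synthetic frame:
boxes = flag_frame(torch.rand(3, 480, 640))
print(f"{len(boxes)} region(s) flagged for the endoscopist to review")
```

In deployed systems the flagged boxes are overlaid on the monitor in real time, which is also where the human-AI interaction questions raised above come into play: the endoscopist still decides what to do with each flag.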
Artificial intelligence adoption in clinical practice
Clinical practice with regard to AI and colorectal cancer screening largely mirrors the disconnect in the current literature, with “believers” and “non-believers” as well as innovators and early adopters alongside laggards. In our own academic practices, we continue to struggle with the adoption and standardized implementation of AI-based colorectal cancer CADe systems, despite the RCT data showing positive results. It is likely that AI uptake will follow the technology predictions of Amara’s Law — i.e., individuals tend to overestimate the short-term impact of new technologies while underestimating long-term effects. In the end, more widespread adoption in community practice and larger scale real-world clinical outcomes studies are likely to determine the true impact of these exciting technologies. For other, less established AI-based tools, more data are currently required.
Conclusions
Ultimately, AI-based algorithms are likely here to stay, with continued improvement and evolution to occur based on provider feedback and patient care needs. Current tools, while not all-encompassing, have the potential to dramatically change the landscape of endoscopic training, diagnostic evaluation, and therapeutic care. It is critically important that relevant stakeholders, both endoscopists and patients, be involved in future applications and design to improve efficiency and quality outcomes overall.
Dr. McCarty is based in the Lynda K. and David M. Underwood Center for Digestive Disorders, Houston Methodist Hospital. Dr. Mansour is based in the section of gastroenterology, Baylor College of Medicine, Houston. Dr. McCarty reports no conflicts of interest. Dr. Mansour reports having been a consultant for Iterative Health.
References
1. Repici A, et al. Efficacy of real-time computer-aided detection of colorectal neoplasia in a randomized trial. Gastroenterology. 2020 Aug. doi: 10.1053/j.gastro.2020.04.062.
2. Repici A, et al. Artificial intelligence and colonoscopy experience: Lessons from two randomised trials. Gut. 2022 Apr. doi: 10.1136/gutjnl-2021-324471.
3. Wallace MB, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology. 2022 Jul. doi: 10.1053/j.gastro.2022.03.007.
4. United States Food and Drug Administration (FDA). GI Genius FDA Approval [April 9, 2021]. Accessed January 5, 2022. Available at: www.accessdata.fda.gov/cdrh_docs/pdf21/K211951.pdf.
5. Maas MHJ, et al. A computer-aided polyp detection system in screening and surveillance colonoscopy: An international, multicentre, randomised, tandem trial. Lancet Digit Health. 2024 Mar. doi: 10.1016/S2589-7500(23)00242-X.
6. Ladabaum U, et al. Computer-aided detection of polyps does not improve colonoscopist performance in a pragmatic implementation trial. Gastroenterology. 2023 Mar. doi: 10.1053/j.gastro.2022.12.004.
7. Wei MT, et al. Evaluation of computer-aided detection during colonoscopy in the community (AI-SEE): A multicenter randomized clinical trial. Am J Gastroenterol. 2023 Oct. doi: 10.14309/ajg.0000000000002239.
8. de Groof J, et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United European Gastroenterol J. 2019 May. doi: 10.1177/2050640619837443.
9. Kanesaka T, et al. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest Endosc. 2018 May. doi: 10.1016/j.gie.2017.11.029.
10. Sahafi A, et al. Edge artificial intelligence wireless video capsule endoscopy. Sci Rep. 2022 Aug. doi: 10.1038/s41598-022-17502-7.
11. Njei B, et al. Artificial intelligence in endoscopic imaging for detection of malignant biliary strictures and cholangiocarcinoma: A systematic review. Ann Gastroenterol. 2023 Mar-Apr. doi: 10.20524/aog.2023.0779.
12. Ebigbo A, et al. Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm. Gut. 2022 Dec. doi: 10.1136/gutjnl-2021-326470.
13. Cao J, et al. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun. 2023 Oct. doi: 10.1038/s41467-023-42451-8.
The Promise and Challenges of AI in Hepatology
BY BASILE NJEI, MD, MPH, PHD; YAZAN A. AL-AJLOUNI, MPHIL
In the dynamic realm of medicine, artificial intelligence (AI) emerges as a transformative force, notably within hepatology. The discipline of hepatology, dedicated to liver and related organ diseases, is ripe for AI’s promise to revolutionize diagnostics and treatment, pushing toward a future of precision medicine. Yet, the path to fully realizing AI’s potential in hepatology is laced with data, ethical, and integration challenges.
The application of AI, particularly in histopathology, significantly enhances disease diagnosis and staging in hepatology. AI-driven approaches remedy traditional histopathological challenges, such as interpretative variability, providing more consistent and accurate disease analyses. This is especially evident in conditions like metabolic dysfunction-associated steatohepatitis (MASH) and hepatocellular carcinoma (HCC), where AI aids in identifying critical gene signatures, thereby refining therapy selection.
Similarly, deep learning (DL), a branch of AI, has attracted significant global interest, particularly in image recognition. AI’s incorporation into medical imaging marks a significant advancement, enabling early detection of malignancies like HCC and improving diagnostics in steatotic liver disease through enhanced imaging analyses using convolutional neural networks (CNNs). The abundance of imaging data alongside clinical outcomes has catalyzed AI’s integration into radiology, driving the swift growth of radiomics as a novel domain in medical research.
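As a rough illustration of the CNN approach described above, the sketch below defines a minimal, untrained PyTorch classifier for a hypothetical task of grading hepatic steatosis from grayscale ultrasound frames. The architecture and the four-grade output are illustrative assumptions, not a published model.

```python
# Minimal, untrained CNN sketch for a hypothetical steatosis-grading
# task on grayscale ultrasound frames. Illustrative only.

import torch
import torch.nn as nn

class SteatosisCNN(nn.Module):
    def __init__(self, n_grades: int = 4):  # assumed grades, e.g. S0-S3
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),         # global pooling to (batch, 32, 1, 1)
        )
        self.classifier = nn.Linear(32, n_grades)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale frames
        return self.classifier(self.features(x).flatten(1))

logits = SteatosisCNN()(torch.rand(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```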
AI has also been shown to identify nuanced alterations in electrocardiograms (EKGs) associated with liver conditions, potentially detecting the progression of liver diseases at an earlier stage than currently possible. By leveraging complex algorithms and machine learning, AI can analyze EKG patterns with a precision and depth unattainable through traditional manual interpretation. Given that liver diseases, such as cirrhosis or hepatitis, can induce subtle cardiac changes long before other clinical symptoms manifest, early detection through AI-enhanced EKG analysis could lead to timely interventions, potentially halting or reversing disease progression. This approach further enriches our understanding of the intricate interplay between liver function and cardiac health, highlighting the potential for AI to transform not just liver disease diagnostics but also to foster a more integrated approach to patient care.
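The cited AI-ECG work applies deep learning to raw ECG waveforms. As a hedged sketch of that general approach (not a reproduction of the published model), the minimal one-dimensional convolutional network below maps a single-lead ECG trace to a hypothetical probability of cirrhosis; it is untrained and illustrative only.

```python
# Untrained 1D-CNN sketch of the general AI-ECG approach: a single-lead
# waveform in, a (hypothetical) cirrhosis probability out.

import torch
import torch.nn as nn

class EcgNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, samples), e.g. 10 s of ECG at 500 Hz = 5000 samples
        return self.net(x)

prob = EcgNet()(torch.rand(1, 1, 5000))
print(f"predicted probability (untrained, illustrative): {prob.item():.2f}")
```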
Beyond diagnostics, the burgeoning field of generative AI introduces groundbreaking possibilities in treatment planning and patient education, particularly for chronic conditions like cirrhosis. Generative AI produces original content, including text, images, and audio, by learning patterns from its training data. When it leverages large language models (LLMs), this entails training on vast collections of textual data using models characterized by very large numbers of parameters. A notable instance of generative AI built on LLMs is ChatGPT, based on the Generative Pre-trained Transformer (GPT) family of models. By simulating disease progression and treatment outcomes, generative AI can foster personalized treatment strategies and empower patients with knowledge about their health trajectories. Yet realizing this potential requires overcoming data quality and interpretability challenges, and ensuring AI outputs are accessible and actionable for clinicians and patients.
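To make the patient-education use case concrete, the sketch below shows how a general-purpose LLM API might be prompted to draft plain-language material about cirrhosis. The model name is a placeholder assumption, and in practice any such output would require clinician review before reaching patients.

```python
# Hedged sketch of prompting an LLM API for patient-education text.
# The model name is a placeholder; output needs clinician review.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You write plain-language patient education material. "
                    "Target an 8th-grade reading level."},
        {"role": "user",
         "content": "Explain what compensated cirrhosis means and which "
                    "warning signs should prompt a call to the clinic."},
    ],
)
print(response.choices[0].message.content)
```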
Despite these advancements, leveraging AI in hepatology is not devoid of hurdles. The development and training of AI models require extensive and diverse datasets, raising concerns about data privacy and ethical use. Addressing these concerns is paramount for successfully integrating AI into clinical hepatology practice. Algorithmic biases, patient privacy, and the impact of AI-driven decisions underscore the need for cautious deployment; developing transparent, understandable algorithms and establishing ethical guidelines for AI use are critical steps toward ethically leveraging AI in patient care.
In conclusion, AI’s integration into hepatology holds tremendous promise for advancing patient care through enhanced diagnostics, treatment planning, and patient education. Overcoming the associated challenges, including ethical concerns, data diversity, and algorithm interpretability, is crucial. As the hepatology community navigates this technological evolution, a balanced approach that marries technological advancements with ethical stewardship will be key to harnessing AI’s full potential, ensuring it serves the best interests of patients and propels the field of hepatology into the future.
We predict a trajectory of increased use and adoption of AI in hepatology. AI in hepatology is likely to meet the test of pervasiveness, improvement, and innovation. The adoption of AI in routine hepatology diagnosis and management will likely follow Amara’s law and the five stages of the hype cycle. We believe we are still in the infancy of adopting AI technology in hepatology; this phase may last 5 years before a peak of inflated expectations is reached. The trough of disillusionment and slope of enlightenment may only be observed over the coming decades.
Dr. Njei is based in the Section of Digestive Diseases, Yale School of Medicine, New Haven, Conn. Mr. Al-Ajlouni is a senior medical student at New York Medical College School of Medicine, Valhalla, N.Y. They have no conflicts of interest to declare.
Sources
Taylor-Weiner A, et al. A Machine Learning Approach Enables Quantitative Measurement of Liver Histology and Disease Monitoring in NASH. Hepatology. 2021 Jul. doi: 10.1002/hep.31750.
Zeng Q, et al. Artificial intelligence predicts immune and inflammatory gene signatures directly from hepatocellular carcinoma histology. J Hepatol. 2022 Jul. doi: 10.1016/j.jhep.2022.01.018.
Ahn JC, et al. Development of the AI-Cirrhosis-ECG Score: An Electrocardiogram-Based Deep Learning Model in Cirrhosis. Am J Gastroenterol. 2022 Mar. doi: 10.14309/ajg.0000000000001617.
Nduma BN, et al. The Application of Artificial Intelligence (AI)-Based Ultrasound for the Diagnosis of Fatty Liver Disease: A Systematic Review. Cureus. 2023 Dec 15. doi: 10.7759/cureus.50601.