Patient Navigation Boosts Follow-Up Colonoscopy Completion
The intervention led to a significant 13-percentage-point increase in follow-up colonoscopy completion at 1 year, compared with usual care (55.1% vs 42.1%), according to the study, which was published online in Annals of Internal Medicine.
“Patients with abnormal fecal test results have about a 1 in 20 chance of having colorectal cancer found, and many more will be found to have advanced adenomas that can be removed to prevent cancer,” Gloria Coronado, PhD, of Kaiser Permanente Center for Health Research, Portland, Oregon, and University of Arizona Cancer Center, Tucson, said in an interview.
“It is critical that these patients get a follow-up colonoscopy,” she said. “Patient navigation can accomplish this goal.”
‘Highly Effective’ Intervention
Researchers compared the effectiveness of a patient navigation program with that of usual care outreach in increasing follow-up colonoscopy completion after an abnormal stool test. They also developed a risk-prediction model that calculated a patient’s probability of obtaining a follow-up colonoscopy without navigation to determine if the addition of this intervention had a greater impact on those determined to be less likely to follow through.
The study included 967 patients from a community health center in Washington State who received an abnormal fecal test result within the prior month. The mean age of participants was 61 years, approximately 45% were women and 77% were White, and 18% preferred a Spanish-language intervention. In total, 479 patients received the intervention and 488 received usual care.
The intervention was delivered by a patient navigator who mailed introductory letters, sent text messages, and made live phone calls. In the calls, the navigators addressed the topics of barrier assessment and resolution, bowel preparation instruction and reminders, colonoscopy check-in, and understanding colonoscopy results and retesting intervals.
Patients in the usual-care group were contacted by a referral coordinator to schedule a follow-up colonoscopy appointment. If they couldn’t be reached initially, up to two follow-up attempts were made at 30 and 45 days after the referral date.
Patient navigation resulted in a significant 13-percentage-point increase in follow-up completion, and those in this group completed a colonoscopy 27 days sooner than those in the usual care group (mean, 229 days vs 256 days).
Contrary to the authors’ expectation, the effectiveness of the intervention did not vary by patients’ predicted likelihood of obtaining a colonoscopy without navigation.
Notably, 20.3% of patients were unreachable or lost to follow-up, and 29.7% did not receive navigation. Among the 479 patients assigned to navigation, 79 (16.5%) declined participation and 56 (11.7%) were never reached.
The study was primarily conducted during the height of the COVID-19 pandemic, which created additional systemic and individual barriers to completing colonoscopies.
Nevertheless, the authors wrote, “our findings suggest that patient navigation is highly effective for patients eligible for colonoscopy.”
“Most patients who were reached were contacted with six or fewer phone attempts,” Coronado noted. “Further efforts are needed to determine how to reach and motivate patients [who did not participate] to get a follow-up colonoscopy.”
Coronado and colleagues are exploring ways to leverage artificial intelligence and virtual approaches to augment patient navigation programs — for example, by using a virtual navigator or low-cost automated tools to provide education to build patient confidence in getting a colonoscopy.
‘A Promising Tool’
“Colonoscopy completion after positive stool-based testing is critical to mitigating the impact of colon cancer,” commented Rajiv Bhuta, MD, assistant professor of clinical gastroenterology & hepatology, Lewis Katz School of Medicine, Temple University, Philadelphia, who was not involved in the study. “While prior studies assessing navigation have demonstrated improvements, none were as large enrollment-wise or as generalizable as the current study.”
That said, Bhuta said in an interview that the study could have provided more detail about coordination and communication with local gastrointestinal practices.
“Local ordering and prescribing practices vary and can significantly impact compliance rates. Were colonoscopies completed via an open access pathway or were the patients required to see a gastroenterologist first? How long was the average wait time for colonoscopy once scheduled? What were the local policies on requiring an escort after the procedure?”
He also noted that some aspects of the study — such as access to reduced-cost specialty care and free ride-share services — may limit generalizability to settings without such resources.
He added: “Although patient navigators for cancer treatment have mandated reimbursement, there is no current reimbursement for navigators for abnormal screening tests, another barrier to widespread implementation.”
Bhuta said that the dropout rate in the study mirrors that of his own real-world practice, which serves a high-risk, low-resource community. “I would specifically like to see research that provides behavioral insights on why patients respond positively to navigation — whether it is due to reminders, emotional support, or logistical assistance. Is it systemic barriers or patient disinterest or both that drives noncompliance?”
Despite these uncertainties and the need to refine implementation logistics, Bhuta concluded, “this strategy is a promising tool to reduce disparities and improve colorectal cancer outcomes. Clinicians should advocate for or implement structured follow-up systems, particularly in high-risk populations.”
The study was funded by the US National Cancer Institute. Coronado received a grant/contract from Guardant Health. Bhuta declared no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
Intermittent Fasting Outperforms Daily Calorie Cutting for Weight Loss
A 4:3 intermittent fasting (IMF) regimen produced greater weight loss at 12 months than daily caloric restriction (DCR), a randomized study found.
A 4:3 IMF program produced modestly greater weight loss than DCR, by 2.89 kg over 12 months, in the context of a guidelines-based, high-intensity, comprehensive behavioral weight loss program, according to Danielle M. Ostendorf, PhD, MS, co–lead author and an assistant professor at the University of Tennessee, Knoxville, and Victoria Catenacci, MD, study principal investigator, co–lead author, and an associate professor at the University of Colorado Anschutz Medical Campus, Aurora.
The study, published in Annals of Internal Medicine, found that objectively measured percentage caloric restriction was greater in the 4:3 IMF group, whereas there was no between-group difference in change in total moderate to vigorous physical activity, suggesting that the difference in weight loss may have been driven by greater adherence to 4:3 IMF. The 4:3 IMF program was well tolerated, and attrition was lower in this group (19% vs 30% in the DCR group).
The authors noted that alternative patterns for restricting dietary energy intake are gaining attention owing to the difficulty of adhering to a reduced-calorie diet daily, with most adults who lose weight through DCR showing significant weight regain a year later.
According to Ostendorf and Catenacci, fasting strategies “come in two different flavors and oftentimes get confused in the lay press and by patients and researchers. And there is a difference between IMF and time-restricted eating (TRE),” they said in an interview. “TRE involves limiting the daily window of food intake to 8-10 hours or less on most days of the week — for example, 16:8 or 14:10 strategies. TRE is done every day, consistently and involves eating in the predefined window, and fasting outside of that window.”
IMF is a more periodic and significant fast and involves cycling between complete or near-complete (> 75%) energy restriction on fast days and ad libitum energy intake on nonfast days.
An appealing feature of IMF is that dieters do not have to focus on counting calories and restricting intake every day as they do with DCR, the authors wrote. Furthermore, the periodic nature of fasting is simpler and may mitigate the constant hunger associated with DCR.
Some participants said the diet was dreadful, but many said it was the easiest diet they had ever been on. “But it did take time for people to adjust to this strategy,” Catenacci said. “It was reassuring to see no evidence of increased binge-eating behaviors.”
Although objectively measured adherence to the targeted energy deficit (percentage caloric restriction from baseline) was below the target of 34.3% in both groups, the 4:3 IMF group showed greater percentage caloric restriction over 12 months. This suggests that, on average, 4:3 IMF may be more sustainable over a year than DCR. However, weight loss varied in both groups. Future studies should evaluate biological and behavioral predictors of response to both 4:3 IMF and DCR in order to personalize recommendations for weight loss.
Study Details
The investigators randomized 165 patients at the University of Colorado Anschutz Medical Campus (mean age, 42 years [range, 18-60]; mean baseline weight, 97.4 kg; mean baseline body mass index [BMI], 34.1) to 4:3 IMF (n = 84) or DCR (n = 81). Of these, 74% were women and 86% were White individuals, and 125 (76%) completed the trial.
The 4:3 IMF group restricted energy intake by 80% on 3 nonconsecutive fast days per week, with ad libitum intake on the other 4 days. On fast days, the 80% calorie reduction corresponded to about 400-600 kcal/d for women and 500-700 kcal/d for men.
“Participants were only required to count calories on their fast days, which is part of the appeal,” Ostendorf said. Although permitted to eat what they wanted on nonfast days, participants were encouraged to make healthy food choices and consume healthy portion sizes.
For its part, the DCR group reduced daily energy intake by 34% to match the weekly energy deficit of 4:3 IMF (3 fast days at 80% restriction yield roughly the same weekly deficit as 7 days at 34%: 3 × 0.80 ≈ 7 × 0.34).
Both groups participated in a high-intensity comprehensive weight loss program with group-based behavioral support and a recommended increase in moderate-intensity physical activity to 300 min/wk.
On the primary endpoint, the 4:3 IMF group showed a weight loss of 7.7 kg (95% CI, –9.6 to –5.9 kg) at 12 months compared with 4.8 kg (95% CI, –6.8 to –2.8 kg; P = .040) in the DCR group. The percentage change in body weight from baseline was –7.6% (95% CI, –9.5% to –5.7%) in the 4:3 IMF group and –5.0% (95% CI, –6.9% to –3.1%) in the DCR group.
At 12 months, 58% (n = 50) of participants in the 4:3 IMF group achieved weight loss of at least 5% vs 47% (n = 27) of those in the DCR group. In addition, 38% (n = 26) of participants in the 4:3 IMF group achieved weight loss of at least 10% at 12 months vs 16% (n = 9) of those in the DCR group. Changes in body composition, BMI, and waist circumference also tended to favor the 4:3 IMF group.
On other 12-month measures, point estimates of change in systolic blood pressure, total and low-density lipoprotein cholesterol levels, triglyceride level, homeostasis model assessment of insulin resistance, fasting glucose level, and hemoglobin A1c level favored 4:3 IMF. Point estimates of change in diastolic blood pressure and high-density lipoprotein cholesterol level favored DCR.
Currently lacking, the authors said, are data on the safety of 4:3 IMF in children, older adults, people of normal weight or only mild overweight, pregnant or lactating women, and adults with conditions such as diabetes, cardiovascular disease, stage 4 or 5 kidney disease, cancer, and eating disorders. “There have been concerns about IMF causing eating disorders, so we did not include people with eating disorders in our study,” Ostendorf and Catenacci said.
Offering an outside perspective on the findings, James O. Hill, PhD, director of the Nutrition Obesity Research Center and a professor at the University of Alabama at Birmingham, said he believes IMF is a viable option for people trying to lose weight and has prescribed the approach for some patients in his practice. “But there is no one strategy that works for everyone,” he said in an interview. “I recommend IMF as a science-based strategy that can be effective for some people, and I think it should be on the list of science-based tools that people can consider using.” But as it won’t work for everyone, “we need to consider both metabolic success and behavioral success. In other words, would it be more effective if people could do it and how easy or hard is it for people to do?”
Audra Wilson, MS, RD, a bariatric dietitian at Northwestern Medicine Delnor Hospital in Geneva, Illinois, who was not involved in the study, expressed more reservations. “We do not specifically recommend intermittent fasting at Northwestern Medicine. There is no set protocol for this diet, and it can vary in ways that can limit nutrition to the point where we are not meeting needs on a regular basis,” she said in an interview.
Moreover, this study did not specify exact nutritional recommendations for participants but merely reduced overall caloric intake. “Although intermittent fasting may be helpful to some, in my nearly 10 years of experience I have not seen it be effective for many and especially not long term,” Wilson added.
Concerningly, IMF can foster disordered eating patterns of restriction followed by binging. “Although a balanced diet is more difficult to achieve, guidance from professionals like dietitians can give patients the tools to achieve balance, meet all nutrient needs, achieve satiety, and maybe most importantly, have a better relationship with food,” she said.
As for the influence of metabolic factors that may be associated with better weight loss, Ostendorf said, “be on the lookout for future publications in this area. We are analyzing data around changes in energy expenditure and changes in hunger-related hormones, among others.” A colleague is collecting biological samples to study genetics in this context. “However, in general, it appeared that the difference in weight loss was due to a greater caloric deficit in the 4:3 IMF group.”
Ostendorf and Catenacci are currently conducting a pilot study testing 4:3 IMF in breast cancer survivors. “We think this is a promising strategy for weight loss in breast cancer survivors who struggle with overweight/obesity in addition to their cancer diagnosis,” Ostendorf said.
This study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases. Ostendorf, Catenacci, Hill, and Wilson disclosed no relevant financial conflicts of interest.
A version of this article appeared on Medscape.com.
As for the influence of metabolic factors that may be associated with better weight loss, Ostendorf said, “be on the lookout for future publications in this area. We are analyzing data around changes in energy expenditure and changes in hunger-related hormones, among others.” A colleague is collecting biological samples to study genetics in this context. “However, in general, it appeared that the difference in weight loss was due to a greater caloric deficit in the 4:3 IMF group.”
Ostendorf and Catenacci are currently conducting a pilot study testing 4:3 IMF in breast cancer survivors. “We think this is a promising strategy for weight loss in breast cancer survivors who struggle with overweight/obesity in addition to their cancer diagnosis,” Ostendorf said.
This study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases. Ostendorf, Catenacci, Hill, and Wilson disclosed no relevant financial conflicts of interest.
A version of this article appeared on Medscape.com.
A 4:3 intermittent fasting (IMF) program produced modestly greater weight loss than daily caloric restriction (DCR), by 2.89 kg over 12 months, in the context of a guidelines-based, high-intensity, comprehensive behavioral weight loss program, a randomized study found. The findings were reported by Danielle M. Ostendorf, PhD, MS, co–lead author and an assistant professor at the University of Tennessee, Knoxville, and Victoria Catenacci, MD, study principal investigator, co–lead author, and an associate professor at the University of Colorado Anschutz Medical Campus, Aurora.
The study, published in Annals of Internal Medicine, found that objectively measured percentage caloric restriction was greater in the 4:3 IMF group, whereas there was no between-group difference in change in total moderate to vigorous physical activity, suggesting that the difference in weight loss may have been driven by greater adherence to 4:3 IMF. The 4:3 IMF program was well tolerated, and attrition was lower in that group (19% vs 30% with DCR).
The authors noted that alternative patterns for restricting dietary energy intake are gaining attention owing to the difficulty of adhering to a reduced-calorie diet daily, with most adults who lose weight through DCR showing significant weight regain a year later.
According to Ostendorf and Catenacci, fasting strategies “come in two different flavors and oftentimes get confused in the lay press and by patients and researchers. And there is a difference between IMF and time-restricted eating (TRE),” they said in an interview. “TRE involves limiting the daily window of food intake to 8-10 hours or less on most days of the week — for example, 16:8 or 14:10 strategies. TRE is done every day, consistently and involves eating in the predefined window, and fasting outside of that window.”
IMF is a more periodic and significant fast and involves cycling between complete or near-complete (> 75%) energy restriction on fast days and ad libitum energy intake on nonfast days.
An appealing feature of IMF is that dieters do not have to focus on counting calories and restricting intake every day as they do with DCR, the authors wrote. Furthermore, the periodic nature of fasting is simpler and may mitigate the constant hunger associated with DCR.
Some participants said the diet was dreadful, but many said it was the easiest diet they had ever been on. “But it did take time for people to adjust to this strategy,” Catenacci said. “It was reassuring to see no evidence of increased binge-eating behaviors.”
Although objectively measured adherence to the targeted energy deficit (percentage caloric restriction from baseline) fell short of the 34.3% target in both groups, the 4:3 IMF group achieved greater percentage caloric restriction over 12 months. This suggests that, on average, 4:3 IMF may be more sustainable over a year than DCR. However, weight loss varied in both groups, and future studies should evaluate biological and behavioral predictors of response to both 4:3 IMF and DCR in order to personalize weight loss recommendations.
Study Details
The investigators randomized 165 patients at the University of Colorado Anschutz Medical Campus (mean age, 42 years [range, 18-60 years]; mean baseline weight, 97.4 kg; mean baseline body mass index [BMI], 34.1) to 4:3 IMF (n = 84) or DCR (n = 81). Of these, 74% were women and 86% were White individuals, and 125 (76%) completed the trial.
The 4:3 IMF group restricted energy intake by 80% on 3 nonconsecutive fast days per week, with ad libitum intake on the other 4 days. On fast days, the 80% restriction corresponded to about 400-600 kcal/d for women and 500-700 kcal/d for men.
“Participants were only required to count calories on their fast days, which is part of the appeal,” Ostendorf said. Although permitted to eat what they wanted on nonfast days, participants were encouraged to make healthy food choices and consume healthy portion sizes.
For its part, the DCR group reduced daily energy intake by 34% to match the weekly energy deficit of 4:3 IMF.
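The match between the two regimens is simple arithmetic: 3 fast days at 80% restriction cut the same share of weekly energy as a roughly 34% cut applied daily. A minimal sketch of that arithmetic follows; the function names and the maintenance-intake values are illustrative assumptions, not study parameters:

```python
# Weekly energy-deficit arithmetic behind matching 4:3 IMF to DCR.
# Illustrative sketch; values mirror the article's description, not study data.

def weekly_restriction_fraction(fast_days: int, fast_restriction: float) -> float:
    """Fraction of weekly maintenance energy cut by fasting `fast_days` per week."""
    return fast_days * fast_restriction / 7

def fast_day_kcal(maintenance_kcal: float, restriction: float = 0.80) -> int:
    """Fast-day intake after restricting maintenance energy by `restriction`."""
    return round(maintenance_kcal * (1 - restriction))

# 4:3 IMF: 3 fast days/week at 80% restriction equals a ~34.3% weekly deficit,
# which is the daily restriction the DCR group was asked to sustain.
print(round(weekly_restriction_fraction(3, 0.80) * 100, 1))  # 34.3

# With assumed maintenance intakes of 2,000-3,000 kcal/d (women) and
# 2,500-3,500 kcal/d (men), fast days land in the reported ranges:
print(fast_day_kcal(2000), fast_day_kcal(3000))  # 400 600
print(fast_day_kcal(2500), fast_day_kcal(3500))  # 500 700
```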
Both groups participated in a high-intensity comprehensive weight loss program with group-based behavioral support and a recommended increase in moderate-intensity physical activity to 300 min/wk.
On the primary endpoint, the 4:3 IMF group showed a weight loss of 7.7 kg (95% CI, –9.6 to –5.9 kg) at 12 months compared with 4.8 kg (95% CI, –6.8 to –2.8 kg; P = .040) in the DCR group. The percentage change in body weight from baseline was –7.6% (95% CI, –9.5% to –5.7%) in the 4:3 IMF group and –5.0% (95% CI, –6.9% to –3.1%) in the DCR group.
At 12 months, 58% (n = 50) of participants in the 4:3 IMF group achieved weight loss of at least 5% vs 47% (n = 27) of those in the DCR group. In addition, 38% (n = 26) of participants in the 4:3 IMF group achieved weight loss of at least 10% at 12 months vs 16% (n = 9) of those in the DCR group. Changes in body composition, BMI, and waist circumference also tended to favor the 4:3 IMF group.
On other 12-month measures, point estimates of change in systolic blood pressure, total and low-density lipoprotein cholesterol levels, triglyceride level, homeostasis model assessment of insulin resistance, fasting glucose level, and hemoglobin A1c level favored 4:3 IMF. Point estimates of change in diastolic blood pressure and high-density lipoprotein cholesterol level favored DCR.
Currently lacking, the authors said, are data on safety in children, older adults, and adults with conditions including diabetes, cardiovascular disease, stage 4 or 5 kidney disease, cancer, and eating disorders, as well as in people of normal weight or only mild overweight and in pregnant or lactating women. “There have been concerns about IMF causing eating disorders, so we did not include people with eating disorders in our study,” Ostendorf and Catenacci said.
Offering an outside perspective on the findings, James O. Hill, PhD, director of the Nutrition Obesity Research Center and a professor at the University of Alabama at Birmingham, said he believes IMF is a viable option for people trying to lose weight and has prescribed the approach for some in his practice. “But there is no one strategy that works for everyone,” he said in an interview. “I recommend IMF as a science-based strategy that can be effective for some people, and I think it should be on the list of science-based tools that people can consider using.” But as it won’t work for everyone, “we need to consider both metabolic success and behavioral success. In other words, would it be more effective if people could do it and how easy or hard is it for people to do?”
Audra Wilson, MS, RD, a bariatric dietitian at Northwestern Medicine Delnor Hospital in Geneva, Illinois, who was not involved in the study, expressed more reservations. “We do not specifically recommend intermittent fasting at Northwestern Medicine. There is no set protocol for this diet, and it can vary in ways that can limit nutrition to the point where we are not meeting needs on a regular basis,” she said in an interview.
Moreover, this study did not specify exact nutritional recommendations for participants but merely reduced overall caloric intake. “Although intermittent fasting may be helpful to some, in my nearly 10 years of experience I have not seen it be effective for many and especially not long term,” Wilson added.
Concerningly, IMF can foster disordered eating patterns of restriction followed by binging. “Although a balanced diet is more difficult to achieve, guidance from professionals like dietitians can give patients the tools to achieve balance, meet all nutrient needs, achieve satiety, and maybe most importantly, have a better relationship with food,” she said.
As for the influence of metabolic factors that may be associated with better weight loss, Ostendorf said, “be on the lookout for future publications in this area. We are analyzing data around changes in energy expenditure and changes in hunger-related hormones, among others.” A colleague is collecting biological samples to study genetics in this context. “However, in general, it appeared that the difference in weight loss was due to a greater caloric deficit in the 4:3 IMF group.”
Ostendorf and Catenacci are currently conducting a pilot study testing 4:3 IMF in breast cancer survivors. “We think this is a promising strategy for weight loss in breast cancer survivors who struggle with overweight/obesity in addition to their cancer diagnosis,” Ostendorf said.
This study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases. Ostendorf, Catenacci, Hill, and Wilson disclosed no relevant financial conflicts of interest.
A version of this article appeared on Medscape.com.
FROM ANNALS OF INTERNAL MEDICINE
Wearable Devices May Predict IBD Flares Weeks in Advance
Wearable devices may predict flares of inflammatory bowel disease (IBD) weeks in advance, according to investigators.
These findings suggest that widely used consumer wearables could support long-term monitoring of IBD and other chronic inflammatory conditions, lead author Robert P. Hirten, MD, of Icahn School of Medicine at Mount Sinai, New York, and colleagues reported.
“Wearable devices are an increasingly accepted tool for monitoring health and disease,” the investigators wrote in Gastroenterology. “They are frequently used in non–inflammatory-based diseases for remote patient monitoring, allowing individuals to be monitored outside of the clinical setting, which has resulted in improved outcomes in multiple disease states.”
Progress has been slower for inflammatory conditions, the investigators noted, despite interest from both providers and patients. Prior studies have explored activity and sleep tracking, or sweat-based biomarkers, as potential tools for monitoring IBD.
Hirten and colleagues took a novel approach, focusing on physiologic changes driven by autonomic nervous system dysfunction — a hallmark of chronic inflammation. Conditions like IBD are associated with reduced parasympathetic activity and increased sympathetic tone, which in turn affect heart rate and heart rate variability. Heart rate tends to rise during flares, while heart rate variability decreases.
Their prospective cohort study included 309 adults with Crohn’s disease (n = 196) or ulcerative colitis (n = 113). Participants used their own or a study-provided Apple Watch, Fitbit, or Oura Ring to passively collect physiological data, including heart rate, resting heart rate, heart rate variability, and step count. A subset of Apple Watch users also contributed oxygen saturation data.
Participants also completed daily symptom surveys using a custom smartphone app and reported laboratory values such as C-reactive protein, erythrocyte sedimentation rate, and fecal calprotectin, as part of routine care. These data were used to identify symptomatic and inflammatory flare periods.
Over a mean follow-up of about 7 months, the physiological data consistently distinguished both types of flares from periods of remission. Heart rate variability dropped significantly during flares, while heart rate and resting heart rate increased. Step counts decreased during inflammatory flares but not during symptom-only flares. Oxygen saturation stayed mostly the same, except for a slight drop seen in participants with Crohn’s disease.
These physiological changes could be detected as early as 7 weeks before a flare. Predictive models that combined multiple metrics — heart rate variability, heart rate, resting heart rate, and step count — were highly accurate, with F1 scores as high as 0.90 for predicting inflammatory flares and 0.83 for predicting symptomatic flares.
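For readers less familiar with the metric, the F1 score is the harmonic mean of precision and recall. A quick illustration with hypothetical flare-detection counts (not study data) shows how a score of 0.90 can arise:

```python
# F1 = harmonic mean of precision and recall; counts below are hypothetical.

def f1_from_counts(tp: int, fp: int, fn: int) -> float:
    """F1 score given true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)   # flagged flares that were real
    recall = tp / (tp + fn)      # real flares that were flagged
    return 2 * precision * recall / (precision + recall)

# e.g., a model that catches 90 of 100 true flares with 10 false alarms:
print(round(f1_from_counts(tp=90, fp=10, fn=10), 2))  # 0.9
```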
In addition, wearable data helped differentiate between flares caused by active inflammation and those driven by symptoms alone. Even when symptoms were similar, heart rate variability, heart rate, and resting heart rate were significantly higher when inflammation was present — suggesting wearable devices may help address the common mismatch between symptoms and actual disease activity in IBD.
“These findings support the further evaluation of wearable devices in the monitoring of IBD,” the investigators concluded.
The study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases and Ms. Jenny Steingart. The investigators disclosed additional relationships with Agomab, Lilly, Merck, and others.
Dana J. Lukin, MD, PhD, AGAF, of New York-Presbyterian Hospital/Weill Cornell Medicine, New York City, described the study by Hirten et al as “provocative.”
“While the data require a machine learning approach to transform the recorded values into predictive algorithms, it is intriguing that routinely recorded information from smart devices can be used in a manner to inform disease activity,” Lukin said in an interview. “Furthermore, the use of continuously recorded physiological data in this study likely reflects longitudinal health status more accurately than cross-sectional use of patient-reported outcomes or episodic biomarker testing.”
In addition to offering potentially higher accuracy than conventional monitoring, the remote strategy is also more convenient, he noted.
“The use of these devices is likely easier to adhere to than the use of other contemporary monitoring strategies involving the collection of stool or blood samples,” Lukin said. “It may become possible to passively monitor a larger number of patients at risk for flares remotely,” especially given that “almost half of Americans utilize wearables, such as the Apple Watch, Oura Ring, and Fitbit.”
Still, Lukin predicted challenges with widespread adoption.
“More than half of Americans do not routinely [use these devices],” Lukin said. “Cost, access to internet and smartphones, and adoption of new technology may all be barriers to more widespread use.”
He suggested that the present study offers proof of concept, but more prospective data are needed to demonstrate how this type of remote monitoring might improve real-world IBD care.
“Potential studies will assess change in healthcare utilization, corticosteroids, surgery, and clinical flare activity with the use of these data,” Lukin said. “As we learn more about how to handle the large amount of data generated by these devices, our algorithms can be refined to make a feasible platform for practices to employ in routine care.”
Lukin disclosed relationships with Boehringer Ingelheim, Takeda, Vedanta, and others.
FROM GASTROENTEROLOGY
Low-Quality Food Environments Increase MASLD-related Mortality
Low-quality food environments, such as food deserts and food swamps, are associated with increased mortality from metabolic dysfunction–associated steatotic liver disease (MASLD), according to investigators.
These findings highlight the importance of addressing disparities in food environments and social determinants of health to help reduce MASLD-related mortality, lead author Annette Paik, MD, of Inova Health System, Falls Church, Virginia, and colleagues reported.
“Recent studies indicate that food swamps and deserts, as surrogates for food insecurity, are linked to poor glycemic control and higher adult obesity rates,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Understanding the intersection of these factors with sociodemographic and clinical variables offers insights into MASLD-related outcomes, including mortality.”
To this end, the present study examined the association between food environments and MASLD-related mortality across 2,195 US counties. County-level mortality data were obtained from the CDC WONDER database (2016-2020) and linked to food environment data from the US Department of Agriculture Food Environment Atlas using Federal Information Processing Standards (FIPS) codes. Food deserts were defined as low-income areas with limited access to grocery stores, while food swamps were characterized by a predominance of unhealthy food outlets relative to healthy ones.
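The linkage the authors describe amounts to a keyed join of two county-level tables on their shared FIPS code. A minimal sketch follows; the FIPS codes, mortality figures, and field names are made up for illustration and are not study data:

```python
# Keyed join of two county-level tables on FIPS code, mirroring the
# linkage described in the study. All values below are hypothetical.
masld_mortality = {         # county FIPS -> MASLD deaths per 100,000
    "01001": 12.4,
    "01003": 9.8,
    "01005": 11.1,
}
food_environment = {        # county FIPS -> food-access indicators
    "01001": {"food_desert": True, "food_swamp": True},
    "01003": {"food_desert": False, "food_swamp": True},
}

# Keep only counties present in both sources, as a merge on FIPS would.
linked = {
    fips: {"mortality": masld_mortality[fips], **food_environment[fips]}
    for fips in masld_mortality.keys() & food_environment.keys()
}
print(sorted(linked))  # ['01001', '01003']
```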
Additional data on obesity, type 2 diabetes (T2D), and nine social determinants of health were obtained from CDC PLACES and other publicly available datasets. Counties were stratified into quartiles based on MASLD-related mortality rates. Population-weighted mixed-effects linear regression models were used to evaluate associations between food environment exposures and MASLD mortality, adjusting for region, rural-urban status, age, sex, race, insurance coverage, chronic disease prevalence, Supplemental Nutrition Assistance Program (SNAP) participation, and access to exercise facilities.
Counties with the worst food environments had significantly higher MASLD-related mortality, even after adjusting for clinical and sociodemographic factors. Compared with counties in the lowest quartile of MASLD mortality, those in the highest quartile had a greater proportion of food deserts (22.3% vs 14.9%; P < .001) and food swamps (73.1% vs 65.7%; P < .001). They also had a significantly higher prevalence of obesity (40.5% vs 32.5%), type 2 diabetes (15.8% vs 11.4%), and physical inactivity (33.7% vs 24.9%).
Demographically, counties with higher MASLD mortality had significantly larger proportions of Black and Hispanic residents, and were more likely to be rural and located in the South. These counties also had significantly lower median household incomes, higher poverty rates, fewer adults with a college education, lower access to exercise opportunities, greater SNAP participation, less broadband access, and more uninsured adults.
In multivariable regression models, both food deserts and food swamps remained independently associated with MASLD mortality. Counties in the highest quartile of food desert exposure had a 14.5% higher MASLD mortality rate, compared with the lowest quartile (P = .001), and those in the highest quartile for food swamp exposure had a 13.9% higher mortality rate (P = .005).
Type 2 diabetes, physical inactivity, and lack of health insurance were also independently associated with increased MASLD-related mortality.
“Implementing public health interventions that address the specific environmental factors of each county can help US policymakers promote access to healthy, culturally appropriate food choices at affordable prices and reduce the consumption of poor-quality food,” the investigators wrote. “Moreover, improving access to parks and exercise facilities can further enhance the impact of healthy nutrition. These strategies could help curb the growing epidemic of metabolic diseases, including MASLD and related mortality.”
This study was supported by King Faisal Specialist Hospital & Research Center, the Global NASH Council, Center for Outcomes Research in Liver Diseases, and the Beatty Liver and Obesity Research Fund, Inova Health System. The investigators disclosed no conflicts of interest.
A healthy lifestyle continues to be foundational to the management of metabolic dysfunction–associated steatotic liver disease (MASLD). Poor diet quality is a risk factor for developing MASLD in the US general population. Food deserts and food swamps are both symptoms of socioeconomic hardship, characterized by limited access to healthy food (as described by the US Department of Agriculture Dietary Guidelines for Americans) owing to the absence of grocery stores and supermarkets. Food swamps, however, also suffer from abundant access to unhealthy, energy-dense, yet nutritionally sparse (EDYNS) foods.
The article by Paik et al shows that food deserts and food swamps are associated not only with the burden of MASLD in the United States but also with MASLD-related mortality. Counties with the highest MASLD-related mortality had a greater prevalence of food swamps and food deserts, more poverty, unemployment, household crowding, lack of broadband internet access, and lower rates of high school education; they also had more elderly and Hispanic residents and were more likely to be located in the South.
MASLD appears to have origins in the dark underbelly of socioeconomic hardship, which may preclude many of our patients from complying with lifestyle changes. Policy changes are urgently needed at a national level, from increasing incentives to establish grocery stores in food deserts to limiting the proportion of EDYNS foods in grocery stores and requiring conspicuous labeling of EDYNS foods by the Food and Drug Administration. At the individual practice level, we can support patients with MASLD in the clinic with a dietitian and educational materials and, where possible, use applications that encourage healthy dietary habits, empowering patients to choose healthy food options.
Niharika Samala, MD, is assistant professor of medicine, associate program director of the GI Fellowship, and director of the IUH MASLD/NAFLD Clinic at the Indiana University School of Medicine, Indianapolis. She reported no relevant conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Infrequent HDV Testing Raises Concern for Worse Liver Outcomes
Hepatitis D virus (HDV) testing is infrequent among US veterans with chronic hepatitis B, raising concern for worse liver outcomes, according to new findings.
The low testing rate suggests limited awareness of HDV-associated risks in patients with chronic hepatitis B (CHB) and underscores the need for earlier testing and diagnosis, lead author Robert J. Wong, MD, of Stanford University School of Medicine, Stanford, California, and colleagues reported.
“Data among US populations are lacking to describe the epidemiology and long-term outcomes of patients with CHB and concurrent HDV infection,” the investigators wrote in Gastro Hep Advances (2025 Oct. doi: 10.1016/j.gastha.2024.10.015).
Prior studies have found that only 6% to 19% of patients with CHB get tested for HDV, and among those tested, the prevalence is relatively low—between 2% and 4.6%. Although relatively uncommon, HDV carries a substantial clinical and economic burden, Dr. Wong and colleagues noted, highlighting the importance of clinical awareness and accurate epidemiologic data.
The present study analyzed data from the Veterans Affairs (VA) Corporate Data Warehouse between 2010 and 2023. Adults with CHB were identified based on laboratory-confirmed markers and ICD-9/10 codes. HDV testing (anti-HDV antibody and HDV RNA) was assessed, and predictors of testing were evaluated using multivariable logistic regression.
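Predictor analysis of this kind is typically a multivariable logistic regression. A minimal NumPy sketch on synthetic data follows; the predictors and effect sizes are hypothetical, and the model is fit by plain gradient descent rather than the authors' statistical software:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binary predictors of HDV testing (illustrative, not VA data):
# x1 = HBeAg positive, x2 = on antiviral therapy.
n = 5000
x = rng.integers(0, 2, size=(n, 2)).astype(float)
logit = -2.0 + 1.0 * x[:, 0] + 1.5 * x[:, 1]
tested = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Multivariable logistic regression fit by gradient ascent on the
# mean log-likelihood.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (tested - p) / n

odds_ratios = np.exp(beta[1:])
print(odds_ratios)  # near exp(1.0) ~ 2.7 and exp(1.5) ~ 4.5
```

The exponentiated coefficients are the adjusted odds ratios that a study like this would report for each predictor of being tested.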
To examine liver-related outcomes, patients who tested positive for HDV were propensity score–matched 1:2 with CHB patients who tested negative. Matching accounted for age, sex, race/ethnicity, HBeAg status, antiviral treatment, HCV and HIV coinfection, diabetes, and alcohol use. Patients with cirrhosis or hepatocellular carcinoma (HCC) at baseline were excluded. Incidence of cirrhosis, hepatic decompensation, and HCC was estimated using competing-risks Nelson-Aalen methods.
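Greedy 1:2 nearest-neighbor matching on a propensity score can be sketched as follows; the scores here are random placeholders, and the study's actual procedure may differ in details such as calipers or matching order:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative propensity scores (probability of being HDV positive given
# covariates); in the study these would come from a fitted model.
ps_pos = rng.uniform(0.2, 0.8, size=71)    # HDV-positive patients
ps_neg = rng.uniform(0.0, 1.0, size=2000)  # HDV-negative candidates

# Greedy 1:2 nearest-neighbor matching without replacement.
available = np.ones(len(ps_neg), dtype=bool)
matches = {}
for i in np.argsort(ps_pos):               # process in score order
    picked = []
    for _ in range(2):
        d = np.abs(ps_neg - ps_pos[i])
        d[~available] = np.inf              # exclude already-used controls
        j = int(np.argmin(d))
        available[j] = False
        picked.append(j)
    matches[i] = picked

n_controls = sum(len(v) for v in matches.values())
print(len(matches), n_controls)  # 71 matched sets, 142 controls
```

Exact 1:2 matching yields 142 controls; the study's final cohort of 140 suggests some sets were incompletely matched.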
Among 27,548 veterans with CHB, only 16.1% underwent HDV testing. Of those tested, 3.25% were HDV positive. Testing rates were higher among patients who were HBeAg positive, on antiviral therapy, or identified as Asian or Pacific Islander.
Conversely, testing was significantly less common among patients with high-risk alcohol use, past or current drug use, cirrhosis at diagnosis, or HCV coinfection. In contrast, HIV coinfection was associated with increased odds of being tested.
Among those tested, HDV positivity was more likely in patients with HCV coinfection, cirrhosis, or a history of drug use. On multivariable analysis, these factors were independent predictors of HDV positivity.
In the matched cohort of 71 HDV-positive patients and 140 HDV-negative controls, the incidence of cirrhosis was more than 3-fold higher in HDV-positive patients (4.39 vs 1.30 per 100,000 person-years; P < .01), and hepatic decompensation was over 5 times more common (2.18 vs 0.41 per 100,000 person-years; P = .01). There was also a nonsignificant trend toward increased HCC risk in the HDV group.
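The fold comparisons can be checked directly from the reported incidence rates; the ratios below reproduce the "more than 3-fold" and "over 5 times" figures:

```python
# Rate ratios computed from the reported incidence rates.
cirrhosis_ratio = 4.39 / 1.30   # HDV-positive vs HDV-negative cirrhosis incidence
decomp_ratio = 2.18 / 0.41      # hepatic decompensation incidence
print(round(cirrhosis_ratio, 1), round(decomp_ratio, 1))  # -> 3.4 5.3
```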
“These findings align with existing studies and confirm that among a predominantly non-Asian US cohort of CHB patients, presence of concurrent HDV is associated with more severe liver disease progression,” the investigators wrote. “These observations, taken together with the low rates of HDV testing overall and particularly among high-risk individuals, emphasizes the need for greater awareness and novel strategies on how to improve HDV testing and diagnosis, particularly given that novel HDV therapies are on the near horizon.”
The study was supported by Gilead. The investigators disclosed additional relationships with Exact Sciences, GSK, Novo Nordisk, and others.
Hepatitis D virus (HDV) is an RNA “sub-virus” that infects patients with co-existing hepatitis B virus (HBV) infections. HDV infection currently affects approximately 15-20 million people worldwide but is an orphan disease in the United States with fewer than 100,000 individuals infected today.
Those with HDV have a 70% lifetime risk of hepatocellular carcinoma (HCC), cirrhosis, liver failure, death, or liver transplant. Yet no treatments for HDV are currently approved by the US Food and Drug Administration (FDA), and only one therapy in the European Union has full approval from the European Medicines Agency.
Despite HDV severity and limited treatment options, screening for HDV remains severely inadequate, often limited to sequential testing of individuals deemed high risk. HDV screening would benefit from a revamped approach that automatically reflexes testing: when individuals diagnosed with HBV are positive for hepatitis B surface antigen (HBsAg+), the sample proceeds to total anti-HDV antibody testing and is then double-reflexed to HDV-RNA polymerase chain reaction (PCR) quantitation. This is especially true in the Veterans Administration (VA)’s hospitals and clinics, where Wong and colleagues found very low rates of HDV testing among a national cohort of US veterans with chronic HBV.
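The double-reflex cascade can be expressed as a small decision function. This is a hypothetical sketch; the result encodings and return strings are illustrative, not a real laboratory protocol:

```python
from typing import Optional

# Hypothetical double-reflex HDV testing cascade:
# HBsAg+ -> total anti-HDV antibody -> HDV RNA PCR quantitation.
def hdv_reflex(hbsag_positive: bool,
               anti_hdv_positive: Optional[bool] = None,
               hdv_rna_iu_ml: Optional[float] = None) -> str:
    if not hbsag_positive:
        return "no HDV testing indicated"
    if anti_hdv_positive is None:
        return "reflex: order total anti-HDV antibody"
    if not anti_hdv_positive:
        return "anti-HDV negative; no further testing"
    if hdv_rna_iu_ml is None:
        return "reflex: order HDV RNA PCR quantitation"
    return "active HDV infection" if hdv_rna_iu_ml > 0 else "HDV exposure, RNA undetectable"

print(hdv_reflex(True))                # -> reflex: order total anti-HDV antibody
print(hdv_reflex(True, True, 1200.0))  # -> active HDV infection
```

Encoding the cascade this way makes the key design point explicit: each reflex step fires automatically from the prior result, with no separate clinician order required.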
This study highlights the importance of timely HDV testing using reflex tools to improve diagnosis and HDV treatment, reducing long-term risks of liver-related morbidity and mortality.
Robert G. Gish, MD, AGAF, is principal at Robert G Gish Consultants LLC, clinical professor of medicine at Loma Linda University, Loma Linda, Calif., and medical director of the Hepatitis B Foundation. His complete list of disclosures can be found at www.robertgish.com/about.
FROM GASTRO HEP ADVANCES
Stretcher vs Table for Operative Hand Surgery
US Department of Veterans Affairs (VA) health care facilities have not recovered from staff shortages that occurred during the COVID-19 pandemic.1 Veterans Health Administration operating rooms (ORs) lost many valuable clinicians during the pandemic due to illness, relocation, burnout, and retirement, and remain below prepandemic levels. The staffing shortage has resulted in lost OR time, leading to longer wait times for surgery. In October 2021, the Malcom Randall VA Medical Center (MRVAMC) Plastic Surgery Service implemented a surgery-on-stretcher initiative, in which patients arriving in the OR remained on the stretcher throughout surgery rather than being transferred to the operating table. Avoiding patient transfers was identified as a strategy to increase the number of procedures performed while providing additional benefits to the patients and staff.
The intent of the surgery-on-stretcher initiative was to reduce OR turnover time and in-room time, decrease supply costs, and improve patient and staff safety. The objective of this study was to evaluate the new process in terms of time efficiency, cost savings, and safety.
METHODS
The University of Florida Institutional Review Board (IRB) and North Florida/South Georgia Veterans Health System Research and Development Committee (IRB.net) approved a retrospective chart review of hand surgery cases performed in the same OR by the same surgeon over 2 year-long periods: October 1, 2020, through September 30, 2021, when surgeries were performed on the operating table (Figure 1), and June 1, 2022, through May 31, 2023, when surgeries were performed on the stretcher (Figure 2). Time intervals were obtained from the Nurse Intraoperative Report in the electronic medical record. Three intervals were measured: “patient in OR” to “operation begin,” “operation end” to “patient out OR,” and “patient out OR” to the next “patient in OR.” The median of each interval was calculated for each study period and compared.


A Mann-Whitney U test was used to determine statistical significance between the groups. We queried the Patient Safety Manager (Jason Ringlehan, BSN, RN, oral communication, 2023) and the Employee Health Nurse (Ivan Cool, BSN, RN, oral communication, June 16, 2023) for reported patient or employee injuries related to patient transfers. We asked Inventory Supply personnel to provide the cost of materials used in the transfer process; surgeries performed on the stretcher incurred no transfer-related supply costs.
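The Mann-Whitney U test compares two independent samples by ranks rather than means, which suits skewed time-interval data. As a minimal sketch of the statistic itself (the interval values below are hypothetical, not the study data), U can be computed by pairwise comparison:

```python
def mann_whitney_u(x, y):
    """U statistic for sample x versus y: the number of (x_i, y_j)
    pairs with x_i > y_j, counting ties as 0.5 each."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical in-room times in minutes (illustration only).
table_times = [26, 28, 25, 30, 24, 27]
stretcher_times = [22, 24, 21, 23, 25, 20]

u = mann_whitney_u(table_times, stretcher_times)
# Under the null hypothesis, U is expected to be near n1*n2/2 = 18;
# values far from 18 suggest a shift between the groups.
```

In practice the P values reported in the study would come from standard statistical software rather than a hand-rolled function; this sketch only shows what the test measures.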
RESULTS
A total of 306 hand surgeries were performed on a table and 191 on a stretcher during the study periods. The median patient in OR to operation begin interval was 25 minutes for the table and 23 minutes for the stretcher. The median operation end to patient out OR interval was 4 minutes for the table and 3 minutes for the stretcher. The time savings were statistically significant (P < .001) at both the beginning and end of surgery. The median room turnover time was 27 minutes in both periods; the difference was not statistically significant (P = .70). There were no reported employee or patient injuries attributed to OR transfers during either period. Supply cost savings were $111.28 per case when surgery was performed on the stretcher (Table).

DISCUSSION
The new process of performing surgery on the stretcher was introduced to improve OR time efficiency. This improved efficiency has been reported in the hand surgery literature; however, the authors anticipated resistance from seasoned OR staff to implementing a new process.2,3 Once the idea was conceived, the plan was reviewed with the Anesthesia Service to confirm they had no safety concerns. The rest of the OR staff, including nurses and surgical technicians, agreed to participate, and no resistance was encountered. The anesthesia, nursing, and scrub staff were happy to skip a potentially hazardous step at the beginning and end of each hand surgery case. The anesthesiologists noted that the OR bed is preferred for intubation, but our hand surgeries are performed under local or regional block and intravenous sedation. The table was removed from the room to avoid any confusion with changes in staff during the day.
Compared with table use, surgery on the stretcher saved a median of 3 minutes of in-room time per case, with no significant difference in turnover time. The time savings reported here were consistent with what has been reported in other studies. Garras et al saved 7.5 minutes per case using a rolling hand table for their hand surgeries,2 while Gonzalez et al reported a 4-minute reduction per case when using a stretcher-based hand table for carpal tunnel and trigger finger surgeries.3 Lause et al found a 2-minute time savings at the start of their foot and ankle surgeries.4
Although 3 minutes per case may seem minimal, when applied to a conservative schedule of 5 hand cases twice a week, this time savings translates to an additional 15-minute nursing break each day, a 30-minute lunch break each week, and 26 extra hours each year. This efficiency can reduce direct overtime costs. Consistently ending the day on time and allowing time for scheduled breaks can facilitate retention and improve morale in our current environment of chronically short-staffed surgical services. Recent literature estimates the cost of 1 OR minute to be about $36 to $46.5,6
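These figures follow directly from the per-case savings; a quick back-of-the-envelope check, assuming 3 minutes saved per case on the schedule described:

```python
# Annualized time savings from 3 minutes saved per hand case,
# at 5 cases per OR day and 2 OR days per week.
minutes_per_case = 3
cases_per_day = 5
or_days_per_week = 2
weeks_per_year = 52

minutes_per_day = minutes_per_case * cases_per_day       # 15 min/day
minutes_per_week = minutes_per_day * or_days_per_week    # 30 min/week
hours_per_year = minutes_per_week * weeks_per_year / 60  # 26 h/year
```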
Lateral transfers, in which a patient is moved horizontally, take place throughout the day in the OR and are a known risk factor for musculoskeletal disorders among the nursing staff. Contributing factors include patient obesity, environmental barriers in the OR, uneven patient weight distribution, and height differences among surgical team members. The Association of periOperative Registered Nurses recommends use of a lateral transfer device such as a friction-reducing sheet, slider board, or air-assisted device.7 The single-use HoverSling Repositioning Sheet is the transfer assist device used in our OR. It is an inflatable transfer mattress, inflated by a small motor, that reduces the force required for patient transfer. While the HoverSling is inflated, air escaping from small holes on the underside of the mattress acts as a lubricant between the patient and the transfer surface, reducing the force needed to move the patient.8
Patient transfers are a known risk for both patient and staff injuries.9,10 We suspected that not transferring our surgical patients between the stretcher and bed would improve patient and staff safety. A review of Patient Safety and Employee Health services found no reported patient or staff injuries during either timeframe. This finding led to the conclusion that effective safety precautions were already in place before the surgery-on-stretcher initiative. The MRVAMC routinely uses patient transfer equipment and the standard procedure in the OR is for 5 people to participate in 1 patient transfer between bed and table. The patient transfer device plus multiple staff involvement with patient transfers could explain the lack of patient and staff injury that predated the surgery-on-stretcher initiative and continued throughout the study period.
The inventory required to facilitate patient transfers at MRVAMC cost on average $111.28 per patient based on a search of the inventory database. This amount includes the HoverSling priced at $97 and the Medline OR Turnover Kit (table sheet, draw sheet, arm board covers, head positioning cover, and positioning foam strap) priced at $14.28. The Plastic Surgery Service routinely performs a minimum of 10 hand cases per week. If $111.28 per case is multiplied by the average of 10 cases each week over 52 weeks, the annualized savings could be about $57,866. This direct cost savings can potentially be applied to necessary equipment expenditures, educational training, or staff salaries.
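The annualized figure can be reproduced from the per-case supply costs quoted above, assuming the stated minimum of 10 cases per week:

```python
# Annualized supply-cost savings from eliminating per-case transfer supplies.
hoversling_cost = 97.00    # single-use HoverSling Repositioning Sheet
turnover_kit_cost = 14.28  # Medline OR Turnover Kit
per_case_savings = hoversling_cost + turnover_kit_cost  # $111.28 per case

cases_per_week = 10
weeks_per_year = 52
annual_savings = per_case_savings * cases_per_week * weeks_per_year
# about $57,866 per year, matching the estimate in the text
```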
Hand surgery literature has encouraged initiatives to reduce waste and develop more environmentally responsible practices.11-13 Eliminating the single-use patient transfer device and the turnover kit would avoid generating additional trash from the OR. Fewer sheets would have to be washed when patients stay on the same stretcher throughout their surgery day, which saves electricity and water.
Strengths and Limitations
A strength of this study is the consistency of the data, which were obtained from observing the same surgeon performing the same surgeries in the same OR. The data were logged into the electronic medical record in real time and easily accessible for data collection and comparison when reviewed retrospectively. A weakness of the study is the inconsistency in logging the in/out and start/end times by the OR circulating nurses who were involved in the patient transfers. The OR circulating nurses can vary from day to day, depending on the staffing assignments, which could affect the speed of each part of the procedure.
CONCLUSIONS
Hand surgery performed on the stretcher saves OR time and supply costs. This added efficiency translates to a savings of 26 hours of OR time and $57,866 in supply costs over the course of a year. Turnover time and staff and patient safety were not affected. This process can be introduced to other surgical specialties that do not need the accessories or various positions the OR table allows.
- Hersey LF. COVID-19 worsened staff shortages at veterans’ medical facilities, IG report finds. Stars and Stripes. October 13, 2023. Accessed February 28, 2025. https://www.stripes.com/theaters/us/2023-10-13/veterans-affairs-health-care-staff-shortages-11695546.html
- Garras DN, Beredjiklian PK, Leinberry CF Jr. Operating on a stretcher: a cost analysis. J Hand Surg Am. 2011;36(12):2078-2079. doi:10.1016/j.jhsa.2011.09.006
- Gonzalez TA, Stanbury SJ, Mora AN, Floyd WE IV, Blazar PE, Earp BE. The effect of stretcher-based hand tables on operating room efficiency at an outpatient surgery center. Orthop J Harv Med Sch. 2017;18:20-24.
- Lause GE, Parker EB, Farid A, et al. Efficiency and perceived safety of foot and ankle procedures performed on the preoperative stretcher versus operating room table. J Perioper Pract. 2024;34(9):268-273. doi:10.1177/17504589231215939
- Childers CP, Maggard-Gibbons M. Understanding costs of care in the operating room. JAMA Surg. 2018;153(4):e176233. doi:10.1001/jamasurg.2017.6233
- Smith TS, Evans J, Moriel K, et al. Cost of operating room time is $46.04 dollars per minute. J Orthop Bus. 2022;2(4):10-13. doi:10.55576/job.v2i4.23
- Waters T, Baptiste A, Short M, Plante-Mallon L, Nelson A. AORN ergonomic tool 1: lateral transfer of a patient from a stretcher to an OR bed. AORN J. 2011;93(3):334-339. doi:10.1016/j.aorn.2010.08.025
- Barry J. The HoverMatt system for patient transfer: enhancing productivity, efficiency, and safety. J Nurs Adm. 2006;36(3):114-117. doi:10.1097/00005110-200603000-00003
- Apple B, Letvak S. Ergonomic challenges in the perioperative setting. AORN J. 2021;113(4):339-348. doi:10.1002/aorn.13345
- Tan J, Krishnan S, Vacanti JC, et al. Patient falls in the operating room setting: an analysis of reported safety events. J Healthc Risk Manag. 2022;42(1):9-14. doi:10.1002/jhrm.21503
- Van Demark RE Jr, Smith VJS, Fiegen A. Lean and green hand surgery. J Hand Surg Am. 2018;43(2):179-181. doi:10.1016/j.jhsa.2017.11.007
- Bravo D, Gaston RG, Melamed E. Environmentally responsible hand surgery: past, present, and future. J Hand Surg Am. 2020;45(5):444-448. doi:10.1016/j.jhsa.2019.10.031
- Tevlin R, Panton JA, Fox PM. Greening hand surgery: targeted measures to reduce waste in ambulatory trigger finger and carpal tunnel decompression. Hand (N Y). 2023;15589447231220412. doi:10.1177/15589447231220412
US Department of Veterans Affairs (VA) health care facilities have not recovered from staff shortages that occurred during the COVID-19 pandemic.1 Veterans Health Administration operating rooms (ORs) lost many valuable clinicians during the pandemic due to illness, relocation, burnout, and retirement, and remain below prepandemic levels. The staffing shortage has resulted in lost OR time, leading to longer wait times for surgery. In October 2021, the Malcom Randall VA Medical Center (MRVAMC) Plastic Surgery Service implemented a surgery-on-stretcher initiative, in which patients arriving in the OR remained on the stretcher throughout surgery rather than being transferred to the operating table. Avoiding patient transfers was identified as a strategy to increase the number of procedures performed while providing additional benefits to the patients and staff.
The intent of the surgery-on-stretcher initiative was to reduce OR turnover time and in-room time, decrease supply costs, and improve patient and staff safety. The objective of this study was to evaluate the new process in terms of time efficiency, cost savings, and safety.
METHODS
The University of Florida Institutional Review Board (IRB) and North Florida/South Georgia Veterans Health System Research and Development Committee (IRB.net) approved a retrospective chart review of hand surgery cases performed in the same OR by the same surgeon over 2 year-long periods: October 1, 2020, through September 30, 2021, when surgeries were performed on the operating table (Figure 1), and June 1, 2022, through May 31, 2023, when surgeries were performed on the stretcher (Figure 2). Time intervals were obtained from the Nurse Intraoperative Report found in the electronic medical record. They ranged from “patient in OR” to “operation begin,” “operation end” to “patient out OR,” and “patient out OR” to next “patient in OR.” The median time intervals were obtained for the 3 different time intervals in each study period and compared.


A Mann-Whitney U test was used to determine statistical significance between the groups. We queried the Patient Safety Manager (Jason Ringlehan, BSN, RN, oral communication, 2023) and the Employee Health Nurse (Ivan Cool, BSN, RN, oral communication, June 16, 2023) for reported patient or employee–patient transfer injuries. We requested Inventory Supply personnel to provide the cost of materials used in the transfer process. There was no cost for surgeries performed on the stretcher.
RESULTS
A total of 306 hand surgeries were performed on a table and 191 were performed on a stretcher during the study periods. The median patient in OR to operation begin time interval was 25 minutes for the table and 23 minutes for the stretcher. The median operation end to patient out OR time was 4 minutes for the table and 3 minutes for the stretcher. Time savings was statistically significant (P < .001) for both ends of the surgery. The median room turnover time was 27 minutes for both time periods and was not statistically significant (P = .70). There were no reported employee or patient injuries attributed to OR transfers during either time period. Supply cost savings was $111.28 per case when surgery was performed on the stretcher (Table).

DISCUSSION
The new process of doing surgery on the stretcher was introduced to improve OR time efficiency. This improved efficiency has been reported in the hand surgery literature; however, the authors anticipated resistance to implementing a new process to seasoned OR staff.2,3 Once the idea was conceived, the plan was reviewed with the Anesthesia Service to confirm they had no safety concerns. The rest of the OR staff, including nurses and surgical technicians, agreed to participate. No resistance was encountered. The anesthesia, nursing, and scrub staff were happy to skip a potentially hazardous step at the beginning and end of each hand surgery case. The anesthesiologists communicated that the OR bed is preferred for intubating, but our hand surgeries are performed under local or regional block and intravenous sedation. The table was removed from the room to avoid any confusion with changes in staff during the day.
Compared with table use, surgery on the stretcher saved a median of 3 minutes of in-room time per case, with no significant difference in turnover time. The time savings reported here were consistent with what has been reported in other studies. Garras et al saved 7.5 minutes per case using a rolling hand table for their hand surgeries,2 while Gonzalez et al reported a 4-minute reduction per case when using a stretcher-based hand table for carpal tunnel and trigger finger surgeries.3 Lause et al found a 2-minute time savings at the start of their foot and ankle surgeries.4
Although 3 minutes per case may seem minimal, when applied to a conservative number of 5 hand cases twice a week, this time savings translates to an additional 15-minute nursing break each day, a 30-minute lunch break each week, and 26 extra hours each year. This efficiency can reduce direct costs in overtime. Consistently ending the day on time and allowing time for scheduled breaks can facilitate retention and improve morale in our current environment of chronically short-staffed surgical services. Recent literature estimates the cost of 1 OR minute to be about $36 to $46.5,6
Lateral transfers, in which a patient is moved horizontally, take place throughout the day in the OR and are a known risk factor for musculoskeletal disorders among the nursing staff. Contributing factors include patient obesity, environmental barriers in the OR, uneven patient weight distribution, and height differences among surgical team members. The Association of periOperative Registered Nurses recommends use of a lateral transfer device such as a friction-reducing sheet, slider board, or air-assisted device.7 The single-use Hover- Sling Repositioning Sheet is the transfer assist device used in our OR. It is an inflatable transfer mattress that reduces the amount of force used in patient transfer. The mattress is inflated with air from a small motor. While the HoverSling is inflated, escaping air from little holes on the underside of the mattress acts as a lubricant between the patient and transfer surface. This air reduces the force needed to move the patient.8
Patient transfers are a known risk for both patient and staff injuries.9,10 We suspected that not transferring our surgical patients between the stretcher and bed would improve patient and staff safety. A review of Patient Safety and Employee Health services found no reported patient or staff injuries during either timeframe. This finding led to the conclusion that effective safety precautions were already in place before the surgery-on-stretcher initiative. The MRVAMC routinely uses patient transfer equipment and the standard procedure in the OR is for 5 people to participate in 1 patient transfer between bed and table. The patient transfer device plus multiple staff involvement with patient transfers could explain the lack of patient and staff injury that predated the surgery-on-stretcher initiative and continued throughout the study period.
The inventory required to facilitate patient transfers at MRVAMC cost on average $111.28 per patient based on a search of the inventory database. This amount includes the HoverSling priced at $97 and the Medline OR Turnover Kit (table sheet, draw sheet, arm board covers, head positioning cover, and positioning foam strap) priced at $14.28. The Plastic Surgery Service routinely performs a minimum of 10 hand cases per week. If $111.28 per case is multiplied by the average of 10 cases each week over 52 weeks, the annualized savings could be about $57,866. This direct cost savings can potentially be applied to necessary equipment expenditures, educational training, or staff salaries.
Hand surgery literature has encouraged initiatives to reduce waste and develop more environmentally responsible practices.11-13 Eliminating the single-use patient transfer device and the turnover kit would avoid generating additional trash from the OR. Fewer sheets would have to be washed when patients stay on the same stretcher throughout their surgery day, which saves electricity and water.
Strengths and Limitations
A strength of this study is the consistency of the data, which were obtained from observing the same surgeon performing the same surgeries in the same OR. The data were logged into the electronic medical record in real time and easily accessible for data collection and comparison when reviewed retrospectively. A weakness of the study is the inconsistency in logging the in/out and start/end times by the OR circulating nurses who were involved in the patient transfers. The OR circulating nurses can vary from day to day, depending on the staffing assignments, which could affect the speed of each part of the procedure.
CONCLUSIONS
Hand surgery performed on the stretcher saves OR time and supply costs. This added efficiency translates to a savings of 26 hours of OR time and $57,866 in supply costs over the course of a year. Turnover time and staff and patient safety were not affected. This process can be introduced to other surgical specialties that do not need the accessories or various positions the OR table allows.
US Department of Veterans Affairs (VA) health care facilities have not recovered from staff shortages that occurred during the COVID-19 pandemic.1 Veterans Health Administration operating rooms (ORs) lost many valuable clinicians during the pandemic due to illness, relocation, burnout, and retirement, and remain below prepandemic levels. The staffing shortage has resulted in lost OR time, leading to longer wait times for surgery. In October 2021, the Malcom Randall VA Medical Center (MRVAMC) Plastic Surgery Service implemented a surgery-on-stretcher initiative, in which patients arriving in the OR remained on the stretcher throughout surgery rather than being transferred to the operating table. Avoiding patient transfers was identified as a strategy to increase the number of procedures performed while providing additional benefits to the patients and staff.
The intent of the surgery-on-stretcher initiative was to reduce OR turnover time and in-room time, decrease supply costs, and improve patient and staff safety. The objective of this study was to evaluate the new process in terms of time efficiency, cost savings, and safety.
METHODS
The University of Florida Institutional Review Board (IRB) and North Florida/South Georgia Veterans Health System Research and Development Committee (IRB.net) approved a retrospective chart review of hand surgery cases performed in the same OR by the same surgeon over 2 year-long periods: October 1, 2020, through September 30, 2021, when surgeries were performed on the operating table (Figure 1), and June 1, 2022, through May 31, 2023, when surgeries were performed on the stretcher (Figure 2). Time intervals were obtained from the Nurse Intraoperative Report found in the electronic medical record. They ranged from “patient in OR” to “operation begin,” “operation end” to “patient out OR,” and “patient out OR” to next “patient in OR.” The median time intervals were obtained for the 3 different time intervals in each study period and compared.


A Mann-Whitney U test was used to determine statistical significance between the groups. We queried the Patient Safety Manager (Jason Ringlehan, BSN, RN, oral communication, 2023) and the Employee Health Nurse (Ivan Cool, BSN, RN, oral communication, June 16, 2023) for reported patient or employee–patient transfer injuries. We requested Inventory Supply personnel to provide the cost of materials used in the transfer process. There was no cost for surgeries performed on the stretcher.
RESULTS
A total of 306 hand surgeries were performed on a table and 191 were performed on a stretcher during the study periods. The median patient in OR to operation begin time interval was 25 minutes for the table and 23 minutes for the stretcher. The median operation end to patient out OR time was 4 minutes for the table and 3 minutes for the stretcher. Time savings was statistically significant (P < .001) for both ends of the surgery. The median room turnover time was 27 minutes for both time periods and was not statistically significant (P = .70). There were no reported employee or patient injuries attributed to OR transfers during either time period. Supply cost savings was $111.28 per case when surgery was performed on the stretcher (Table).

DISCUSSION
The new process of doing surgery on the stretcher was introduced to improve OR time efficiency. This improved efficiency has been reported in the hand surgery literature; however, the authors anticipated resistance to implementing a new process to seasoned OR staff.2,3 Once the idea was conceived, the plan was reviewed with the Anesthesia Service to confirm they had no safety concerns. The rest of the OR staff, including nurses and surgical technicians, agreed to participate. No resistance was encountered. The anesthesia, nursing, and scrub staff were happy to skip a potentially hazardous step at the beginning and end of each hand surgery case. The anesthesiologists communicated that the OR bed is preferred for intubating, but our hand surgeries are performed under local or regional block and intravenous sedation. The table was removed from the room to avoid any confusion with changes in staff during the day.
Compared with table use, surgery on the stretcher saved a median of 3 minutes of in-room time per case, with no significant difference in turnover time. The time savings reported here were consistent with what has been reported in other studies. Garras et al saved 7.5 minutes per case using a rolling hand table for their hand surgeries,2 while Gonzalez et al reported a 4-minute reduction per case when using a stretcher-based hand table for carpal tunnel and trigger finger surgeries.3 Lause et al found a 2-minute time savings at the start of their foot and ankle surgeries.4
Although 3 minutes per case may seem minimal, when applied to a conservative number of 5 hand cases twice a week, this time savings translates to an additional 15-minute nursing break each day, a 30-minute lunch break each week, and 26 extra hours each year. This efficiency can reduce direct costs in overtime. Consistently ending the day on time and allowing time for scheduled breaks can facilitate retention and improve morale in our current environment of chronically short-staffed surgical services. Recent literature estimates the cost of 1 OR minute to be about $36 to $46.5,6
Lateral transfers, in which a patient is moved horizontally, take place throughout the day in the OR and are a known risk factor for musculoskeletal disorders among the nursing staff. Contributing factors include patient obesity, environmental barriers in the OR, uneven patient weight distribution, and height differences among surgical team members. The Association of periOperative Registered Nurses recommends use of a lateral transfer device such as a friction-reducing sheet, slider board, or air-assisted device.7 The single-use Hover- Sling Repositioning Sheet is the transfer assist device used in our OR. It is an inflatable transfer mattress that reduces the amount of force used in patient transfer. The mattress is inflated with air from a small motor. While the HoverSling is inflated, escaping air from little holes on the underside of the mattress acts as a lubricant between the patient and transfer surface. This air reduces the force needed to move the patient.8
Patient transfers are a known risk for both patient and staff injuries.9,10 We suspected that not transferring our surgical patients between the stretcher and bed would improve patient and staff safety. A review of Patient Safety and Employee Health services found no reported patient or staff injuries during either timeframe. This finding led to the conclusion that effective safety precautions were already in place before the surgery-on-stretcher initiative. The MRVAMC routinely uses patient transfer equipment and the standard procedure in the OR is for 5 people to participate in 1 patient transfer between bed and table. The patient transfer device plus multiple staff involvement with patient transfers could explain the lack of patient and staff injury that predated the surgery-on-stretcher initiative and continued throughout the study period.
The inventory required to facilitate patient transfers at MRVAMC cost on average $111.28 per patient based on a search of the inventory database. This amount includes the HoverSling priced at $97 and the Medline OR Turnover Kit (table sheet, draw sheet, arm board covers, head positioning cover, and positioning foam strap) priced at $14.28. The Plastic Surgery Service routinely performs a minimum of 10 hand cases per week. If $111.28 per case is multiplied by the average of 10 cases each week over 52 weeks, the annualized savings could be about $57,866. This direct cost savings can potentially be applied to necessary equipment expenditures, educational training, or staff salaries.
Hand surgery literature has encouraged initiatives to reduce waste and develop more environmentally responsible practices.11-13 Eliminating the single-use patient transfer device and the turnover kit would avoid generating additional trash from the OR. Fewer sheets would have to be washed when patients stay on the same stretcher throughout their surgery day, which saves electricity and water.
Strengths and Limitations
A strength of this study is the consistency of the data, which were obtained from observing the same surgeon performing the same surgeries in the same OR. The data were logged into the electronic medical record in real time and easily accessible for data collection and comparison when reviewed retrospectively. A weakness of the study is the inconsistency in logging the in/out and start/end times by the OR circulating nurses who were involved in the patient transfers. The OR circulating nurses can vary from day to day, depending on the staffing assignments, which could affect the speed of each part of the procedure.
CONCLUSIONS
Hand surgery performed on the stretcher saves OR time and supply costs. This added efficiency translates to a savings of 26 hours of OR time and $57,866 in supply costs over the course of a year. Turnover time and staff and patient safety were not affected. This process can be introduced to other surgical specialties that do not need the accessories or various positions the OR table allows.
- Hersey LF. COVID-19 worsened staff shortages at veterans’ medical facilities, IG report finds. Stars and Stripes. October 13, 2023. Accessed February 28, 2025. https:// www.stripes.com/theaters/us/2023-10-13/veterans-affairs-health-care-staff-shortages-11695546.html
- Garras DN, Beredjiklian PK, Leinberry CF Jr. Operating on a stretcher: a cost analysis. J Hand Surg Am. 2011;36(12):2078-2079. doi:10.1016/j.jhsa.2011.09.006
- Gonzalez TA, Stanbury SJ, Mora AN, Floyd WE IV, Blazar PE, Earp BE. The effect of stretcher-based hand tables on operating room efficiency at an outpatient surgery center. Orthop J Harv Med Sch. 2017;18:20-24.
Stretcher vs Table for Operative Hand Surgery
Could Statins Prevent Hepatocellular Carcinoma?
Statin use may reduce the risk for hepatocellular carcinoma and hepatic decompensation in patients with chronic liver disease, emerging research, including several large cohort studies, suggests.
The most recent study, published in JAMA Internal Medicine, showed a lower incidence of hepatic decompensation among statin users in a registry for adults aged 40 years or older with baseline chronic liver disease.
“Our findings support the idea that statins may offer benefits beyond lipid-lowering in patients with [chronic liver disease], and clinicians may be more confident in prescribing statins when indicated,” even in these patients, said corresponding author Raymond T. Chung, MD, gastroenterology investigator at the Mass General Research Institute, Boston, in an interview.
“While prior studies have suggested an association between statin use and reduced hepatocellular carcinoma risk, our study aimed to build on that evidence by using a large, real-world, hospital-based cohort inclusive of all etiologies of chronic liver disease,” Chung told GI & Hepatology News.
Chung, along with Jonggi Choi, MD, of the University of Ulsan College of Medicine, Seoul, South Korea, and colleagues, reviewed data from the Research Patient Data Registry from 2000 to 2023 for 16,501 participants aged 40 years or older with baseline chronic liver disease and baseline Fibrosis-4 (FIB-4) scores ≥ 1.3.
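For readers unfamiliar with the FIB-4 index used as the study's inclusion threshold, it is calculated from age and routine laboratory values. A minimal sketch of the standard formula follows; the patient values in the example are hypothetical, not drawn from the study:

```python
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_10e9_l: float) -> float:
    """FIB-4 index: (age x AST) / (platelet count x sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

# Hypothetical patient: 62 years old, AST 40 U/L, ALT 36 U/L,
# platelets 210 x 10^9/L
score = fib4(62, 40, 36, 210)
print(round(score, 2))  # 1.97 -- above the study's >= 1.3 inclusion threshold
```

A score of 1.3 or higher is commonly used to flag patients at intermediate or higher risk for advanced fibrosis, which is why the study used it as an entry criterion.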
The study population had a mean age of 59.7 years, and 40.9% were women. The researchers divided the population into statin users (n = 3610) and nonusers (n = 12,891). Statin use was defined as a cumulative defined daily dose ≥ 30 mg.
The primary outcome was the cumulative incidence of hepatocellular carcinoma and hepatic decompensation.
At 10 years of follow-up, statin users showed a significantly lower incidence of hepatocellular carcinoma than nonusers (3.8% vs 8.0%; P < .001), as well as a significantly lower incidence of hepatic decompensation (10.6% vs 19.5%; P < .001).
Incorporating FIB-4 scores, a surrogate marker for liver fibrosis, also showed that statin users were less likely to experience fibrosis progression, offering a potential mechanism of action for the observed reduction in adverse liver outcomes, Chung told GI & Hepatology News.
“Similar trends have been observed in prior observational studies, but our findings now support a real effect of statin use on fibrosis progression,” he said. “However, what strengthened our study was that the association remained consistent across multiple subgroups and sensitivity analyses.”
Another study, published in Clinical Gastroenterology and Hepatology, showed a reduced risk of developing severe liver disease in a Swedish cohort of noncirrhotic adults with chronic liver disease who used statins (n = 3862) compared with matched (1:1) control patients with chronic liver disease who did not use statins (hazard ratio [HR], 0.60).
In that study, Rajani Sharma, MD, and colleagues found a protective association in both prefibrosis and fibrosis stages at diagnosis, and statin use was associated with reduced rates of progression to both cirrhosis and hepatocellular carcinoma (HR, 0.62 and 0.44, respectively).
Exciting and Necessary Research
The research by Choi and colleagues is “exciting,” said Bubu Banini, MD, PhD, an assistant professor in digestive diseases at Yale School of Medicine, New Haven, Connecticut, in an interview.
Liver cancer prevalence has risen over the past few decades in the United States and worldwide, and the 5-year overall survival rate of liver cancer is less than 20%, Banini told GI & Hepatology News.
Clinicians often withhold statins out of fear of liver injury in persons with chronic liver disease; however, a takeaway from this study is that for persons with chronic liver disease who have indications for statin use, the medication should not be withheld, she said.
Of course, prospective studies are needed to replicate the results, Banini added.
The study findings were limited by several factors, including the inability to adjust for all potential confounding variables, lack of data on post-index treatments, and the use of wide, cumulative, defined daily dose categories to ensure statistical power, the researchers noted.
“Moving forward, randomized controlled trials are essential to establish a causal relationship and clarify the molecular and clinical pathways through which statins exert hepatoprotective effects,” Chung added.
Randomized controlled trials are also needed to determine whether statins can actually reduce the risk for hepatocellular carcinoma and hepatic decompensation in patients with chronic liver disease, and cost-effectiveness analyses may be essential for translating this evidence into clinical guidelines, he added.
Statins and HCC Risk in the General Population
A large cohort study, published in JAMA Network Open by Mara Sophie Vell, PhD, and colleagues, showed an association between reduced risk for hepatocellular carcinoma and statin use in the general population and in those at increased risk for liver disease.
The study, which included data for individuals aged 37-73 years from the UK Biobank, found a 15% reduced risk for new-onset liver disease and a 28% reduced risk for liver-related death among regular statin users compared with nonusers (HR, 0.85 and 0.72, respectively).
In addition, regular statin users showed a 74% reduced risk (P = .003) of developing hepatocellular carcinoma compared with those not using statins. The researchers identified a particular impact on liver disease risk reduction among men, individuals with diabetes, and patients with high levels of liver scarring at baseline based on the FIB-4 index.
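The percentages and hazard ratios quoted above are two views of the same estimates: for a hazard ratio below 1, the implied percent risk reduction is simply (1 − HR) × 100. A quick arithmetic sketch (not drawn from the study's analysis code):

```python
def pct_risk_reduction(hazard_ratio: float) -> float:
    """Percent risk reduction implied by a hazard ratio below 1."""
    return round((1 - hazard_ratio) * 100, 1)

print(pct_risk_reduction(0.85))  # 15.0 -- new-onset liver disease
print(pct_risk_reduction(0.72))  # 28.0 -- liver-related death
print(pct_risk_reduction(0.26))  # 74.0 -- hepatocellular carcinoma
```

This also shows that the 74% reduction in hepatocellular carcinoma risk corresponds to a hazard ratio of roughly 0.26.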
A meta-analysis of 24 studies, previously published in the journal Cancers, showed a significant 46% reduction in hepatocellular carcinoma risk among statin users compared with nonusers.
The researchers found this risk reduction was significant in subgroups of patients with diabetes, liver cirrhosis, and those on antiviral therapy, and they suggested that the antiangiogenic, immunomodulatory, antiproliferative, and antifibrotic properties of statins may contribute to their potential to reduce tumor growth or hepatocellular carcinoma development.
The meta-analysis authors noted that although most studies have reported a low risk for statin-induced hepatotoxicity, clinicians should proceed with caution in some patients with existing cirrhosis.
“If the patients are diagnosed with decompensated cirrhosis, then statins should be prescribed with caution at low doses,” they wrote.
Advocating statin use solely for chemoprevention may be premature based on observational data, Chung told GI & Hepatology News.
“However, in patients with [chronic liver disease] who already meet indications for statin therapy, the potential added benefit of reducing liver-related complications strengthens the rationale for their use,” he said. Future randomized clinical trials will be key to defining the risk-benefit profile in this context.
The study by Choi and colleagues was supported by the National Institutes of Health.
The study by Sharma and colleagues was supported by the Karolinska Institutet, Stockholm, Sweden, and the Columbia University Irving Medical Center, New York City; researchers were supported by grants from the Swedish Research Council, Center for Innovative Medicine, the Swedish Cancer Society, and the National Institutes of Health.
The study by Vell and colleagues had no outside funding.
The study by Mohaimenul Islam and colleagues was supported by the Ministry of Education and Ministry of Science and Technology, Taiwan.
Chung and Banini had no financial conflicts to disclose.
A version of this article appeared on Medscape.com.