PPI Prophylaxis Prevents GI Bleed in Ventilated Patients
Prophylaxis with a proton pump inhibitor (PPI) reduces the risk for clinically important upper gastrointestinal (GI) bleeding in ventilated patients, according to a randomized trial and a systematic review led by researchers at McMaster University in Hamilton, Ontario, Canada.
Patients in the intensive care unit (ICU) who need mechanical ventilation typically are given a PPI, such as pantoprazole, to prevent upper GI bleeding caused by stress-induced stomach ulcers, but some evidence suggested that their use might increase the risk for pneumonia and death in the most severely ill patients.
As a result, recent guidelines have issued only weak recommendations for stress ulcer prophylaxis, especially with PPIs, in critically ill patients at a high risk for bleeding, Deborah Cook, MD, professor of medicine at McMaster University, and colleagues noted.
To address these questions, they investigated the efficacy and safety of PPIs for preventing upper GI bleeding in critically ill patients.
Both the randomized trial in The New England Journal of Medicine and the systematic review in NEJM Evidence were published online in June.
Significantly Lower Bleeding Risk
The REVISE trial, conducted in eight countries, compared pantoprazole 40 mg daily with placebo in critically ill adults on mechanical ventilation.
The primary efficacy outcome was clinically important upper GI bleeding in the ICU at 90 days, and the primary safety outcome was death from any cause at 90 days.
A total of 4821 patients in 68 ICUs were randomly assigned to the pantoprazole group or placebo group.
Clinically important upper GI bleeding occurred in 25 patients (1%) receiving pantoprazole and in 84 patients (3.5%) receiving placebo. At 90 days, 696 patients (29.1%) in the pantoprazole group died, as did 734 (30.9%) in the placebo group.
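Those figures translate into absolute effect sizes that are easy to work out by hand. The short sketch below is a back-of-the-envelope calculation using the rounded percentages reported above (not the trial's adjusted estimates or exact per-arm denominators), so the results are approximate.

```python
# Approximate effect sizes for clinically important upper GI bleeding in REVISE,
# computed from the rounded event rates reported above (1.0% vs 3.5%).
bleed_pantoprazole = 0.010   # ~1.0% of patients receiving pantoprazole
bleed_placebo = 0.035        # ~3.5% of patients receiving placebo

relative_risk = bleed_pantoprazole / bleed_placebo            # ~0.29
absolute_risk_reduction = bleed_placebo - bleed_pantoprazole  # ~0.025, or 2.5 percentage points
number_needed_to_treat = 1 / absolute_risk_reduction          # ~40 patients

print(f"RR ~{relative_risk:.2f}, ARR ~{absolute_risk_reduction:.1%}, NNT ~{number_needed_to_treat:.0f}")
```

On these rounded numbers, roughly 40 ventilated patients would need pantoprazole prophylaxis to prevent one clinically important upper GI bleed.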
No significant differences were found on key secondary outcomes, including ventilator-associated pneumonia and Clostridioides difficile infection in the hospital.
The authors concluded that pantoprazole resulted in a significantly lower risk for clinically important upper GI bleeding than placebo, and it had no significant effect on mortality.
Disease Severity as a Possible Factor
The systematic review included 12 randomized controlled trials comparing PPIs with placebo or no prophylaxis for stress ulcers in a total of 9533 critically ill adults. The researchers performed meta-analyses and assessed the certainty of the evidence. They also conducted a subgroup analysis combining within-trial subgroup data from the two largest trials.
They found that PPIs were associated with a reduced incidence of clinically important upper GI bleeding (relative risk [RR], 0.51; high-certainty evidence) and may have little or no effect on mortality (RR, 0.99; low-certainty evidence).
However, the within-trial subgroup analysis with intermediate credibility suggested that the effect of PPIs on mortality may differ based on disease severity. The results also raised the possibility that PPI use may decrease 90-day mortality in less severely ill patients (RR, 0.89) and increase mortality in more severely ill patients (RR, 1.08). The mechanisms behind this possible signal are likely multifactorial, the authors noted.
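To get a feel for what those subgroup relative risks could mean in absolute terms, the sketch below applies them to an assumed baseline 90-day mortality of about 30%, a figure borrowed from the REVISE placebo arm reported above; this is an illustrative assumption, not an estimate from the review itself.

```python
# Illustrative conversion of the review's relative risks for 90-day mortality
# into absolute changes, assuming a ~30% baseline risk (taken from the REVISE
# placebo arm above). Real baseline risk varies with the population studied.
baseline_mortality = 0.30

subgroup_rrs = {
    "overall": 0.99,
    "less severely ill": 0.89,
    "more severely ill": 1.08,
}

for label, rr in subgroup_rrs.items():
    absolute_change = baseline_mortality * (rr - 1)  # negative values mean fewer deaths
    print(f"{label}: RR {rr} -> roughly {absolute_change:+.1%} absolute change in mortality")
```

Under that assumption, the subgroup signal corresponds to roughly three fewer deaths per 100 less severely ill patients and about two more per 100 more severely ill patients.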
In addition, the review found that PPIs may have no effect on pneumonia, duration of ICU stay, or duration of hospital stay, and little or no effect on C difficile infection or duration of mechanical ventilation (low-certainty evidence).
“Physicians, nurses, and pharmacists working in the ICU setting will use this information in practice right away, and the trial results and the updated meta-analysis will be incorporated into international practice guidelines,” Dr. Cook said.
Both studies had limitations. The REVISE trial did not include patient-reported disability outcomes, and the results may not be generalizable to patients breathing without assistance. The systematic review included studies with diverse definitions of bleeding and pneumonia and with mortality reported at different milestones, without considering competing-risk analyses. Data on patient-important GI bleeding were available from only one trial, and other potential side effects of PPIs, such as infection with multidrug-resistant organisms, were not reported.
In an editorial accompanying both studies, Samuel M. Brown, MD, a pulmonologist and vice president of research at Intermountain Health, Salt Lake City, Utah, said that the REVISE trial was “well designed and executed, with generalizable eligibility criteria and excellent experimental separation.” He said the researchers had shown that PPIs “slightly but significantly” decrease the risk of important GI bleeding and have a “decent chance” of slightly decreasing mortality in less severely ill patients during mechanical ventilation. At the same time, he noted, PPIs “do not decrease — and may slightly increase — mortality” in severely ill patients.
Dr. Brown wrote that, in his own practice, he intends to prescribe prophylactic PPIs to patients during mechanical ventilation “if they have an APACHE II score of less than 25” or a reasonable equivalent. The APACHE II scoring system is a point-based system that estimates a patient’s risk of death while in an ICU.
“For sicker patients, I would probably reserve the use of proton-pump inhibitors for those who are being treated with antiplatelet agents, especially in the presence of therapeutic anticoagulants,” he added.
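Dr. Brown's stated approach amounts to a simple decision rule. The sketch below restates it in code purely for illustration; the function name and parameters are hypothetical, the rule reflects one editorialist's opinion rather than a guideline, and nothing here should be read as clinical guidance.

```python
# Schematic restatement of the prescribing approach described in the editorial:
# prophylactic PPI if APACHE II < 25 (or a reasonable equivalent); for sicker
# patients, reserve PPIs for those on antiplatelet agents, especially with
# therapeutic anticoagulation. Illustrative only -- not a validated clinical rule.
def would_consider_ppi_prophylaxis(apache_ii_score: int, on_antiplatelet: bool) -> bool:
    if apache_ii_score < 25:
        # Less severely ill patient on mechanical ventilation
        return True
    # Sicker patient: reserve PPIs for those on antiplatelet therapy
    # (the editorial adds "especially in the presence of therapeutic anticoagulants").
    return on_antiplatelet

# Example: APACHE II of 28 in a patient receiving an antiplatelet agent
print(would_consider_ppi_prophylaxis(28, on_antiplatelet=True))  # True
```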
REVISE was supported by numerous grants from organizations in several countries. No funding was specified for the systematic review. Author disclosures and other supplementary materials are available with the full text of the article.
A version of this article first appeared on Medscape.com.
FROM THE NEW ENGLAND JOURNAL OF MEDICINE
How Does ‘Eat Less, Move More’ Promote Obesity Bias?
Experts are debating whether and how to define obesity, but clinicians’ attitudes and behavior toward patients with obesity don’t seem to be undergoing similar scrutiny.
“Despite scientific evidence to the contrary, the prevailing view in society is that obesity is a choice that can be reversed by voluntary decisions to eat less and exercise more,” a multidisciplinary group of 36 international experts wrote in a joint consensus statement for ending the stigma of obesity, published a few years ago in Nature Medicine. “These assumptions mislead public health policies, confuse messages in popular media, undermine access to evidence-based treatments, and compromise advances in research.”
These assumptions also affect how clinicians view and treat their patients.
A systematic review and meta-analysis from Australia using 27 different outcomes to assess weight bias found that “medical doctors, nurses, dietitians, psychologists, physiotherapists, occupational therapists, speech pathologists, podiatrists, and exercise physiologists hold implicit and/or explicit weight-biased attitudes toward people with obesity.”
Another recent systematic review, this one from Brazil, found that obesity bias affected both clinical decision-making and quality of care. Patients with obesity had fewer screening exams for cancer, less-frequent treatment intensification in the management of obesity, and fewer pelvic exams. The authors concluded that their findings “reveal the urgent necessity for reflection and development of strategies to mitigate the adverse impacts” of obesity bias.
“Weight is one of those things that gets judged because it can be seen,” Obesity Society Spokesperson Peminda Cabandugama, MD, of Cleveland Clinic, told this news organization. “People just look at someone with overweight and say, ‘That person needs to eat less and exercise more.’ ”
How Obesity Bias Manifests
The Obesity Action Coalition (OAC), a partner organization to the consensus statement, defines weight bias as “negative attitudes, beliefs, judgments, stereotypes, and discriminatory acts aimed at individuals simply because of their weight. It can be overt or subtle and occur in any setting, including employment, healthcare, education, mass media, and relationships with family and friends.”
The organization notes that weight bias takes many forms, including verbal, written, media, and online.
The consensus statement authors offer these definitions, which encompass the manifestations of obesity bias: Weight stigma refers to “social devaluation and denigration of individuals because of their excess body weight and can lead to negative attitudes, stereotypes, prejudice, and discrimination.”
Weight discrimination refers to “overt forms of weight-based prejudice and unfair treatment (biased behaviors) toward individuals with overweight or obesity.” The authors noted that some public health efforts “openly embrace stigmatization of individuals with obesity based on the assumption that shame will motivate them to change behavior and achieve weight loss through a self-directed diet and increased physical exercise.”
The result: “Individuals with obesity face not only increased risk of serious medical complications but also a pervasive, resilient form of social stigma. Often perceived (without evidence) as lazy, gluttonous, lacking will power and self-discipline, individuals with overweight or obesity are vulnerable to stigma and discrimination in the workplace, education, healthcare settings, and society in general.”
“Obesity bias is so pervasive that the most common thing I hear when I ask a patient why they’re referred to me is ‘my doctor wants me to lose weight,’” Dr. Cabandugama said. “And the first thing I ask them is ‘what do you want to do?’ They come in because they’ve already been judged, and more often than not, in ways that come across as derogatory or punitive — like it’s their fault.”
Why It Persists
Experts say a big part of the problem is the lack of obesity education in medical school. A recent survey study found that medical schools are not prioritizing obesity in their curricula. Among 40 medical schools responding to the survey, only 10% said they believed their students were “very prepared” to manage patients with obesity, and one third had no obesity education program in place and no plans to develop one.
“Most healthcare providers do not get much meaningful education on obesity during medical school or postgraduate training, and many of their opinions may be influenced by the pervasive weight bias that exists in society,” affirmed Jaime Almandoz, MD, medical director of the Weight Wellness Program and associate professor of internal medicine at UT Southwestern Medical Center in Dallas. “We need to prioritize updating education and certification curricula to reflect the current science.”
Small wonder that a recent comparison of explicit weight bias among US resident physicians from 49 medical schools across 16 clinical specialties found “problematic levels” of weight bias — eg, anti-fat blame, anti-fat dislike, and other negative attitudes toward patients — in all specialties.
What to Do
To counteract the stigma, when working with patients who have overweight, “We need to be respectful of them, their bodies, and their health wishes,” Dr. Almandoz told this news organization. “Clinicians should always ask for permission to discuss their weight and frame weight or BMI in the context of health, not just an arbitrary number or goal.”
“Many people with obesity have had traumatic and stigmatizing experiences with well-intentioned healthcare providers,” he noted. “This can lead to the avoidance of routine healthcare and screenings and potential exacerbations and maladaptive health behaviors.”
“Be mindful of the environment that you and your office create for people with obesity,” he advised. “Consider getting additional education and information about weight bias.”
The OAC has resources on obesity bias, including steps clinicians can take to reduce its impact. These include, among others:
- Encouraging patients to share their experiences of stigma to help them feel less isolated in these experiences
- Helping them identify ways to effectively cope with stigma, such as using positive “self-talk” and obtaining social support from others
- Encouraging participation in activities that they may have restricted due to feelings of shame about their weight
Clinicians can also improve the physical and social environment of their practice by having bathrooms that are easily negotiated by heavier individuals, sturdy armless chairs in waiting rooms, offices with large exam tables, gowns and blood pressure cuffs in appropriate sizes, and “weight-friendly” reading materials rather than fashion magazines with thin supermodels.
Importantly, clinicians need to address the issue of weight bias within themselves, their medical staff, and colleagues, according to the OAC. To be effective and empathic with individuals affected by obesity “requires honest self-examination of one’s own attitudes and weight bias.”
Dr. Almandoz reported being a consultant/advisory board member for Novo Nordisk, Boehringer Ingelheim, and Eli Lilly and Company. Dr. Cabandugama reported no competing interests.
A version of this article first appeared on Medscape.com.
Gut Microbiota Tied to Food Addiction Vulnerability
TOPLINE:
Gut microbiome signatures, including a lower relative abundance of Blautia, are associated with vulnerability to food addiction in both mice and humans, and interventions that increased Blautia abundance improved food addiction in mice.
METHODOLOGY:
- Food addiction, characterized by a loss of control over food intake, may promote obesity and alter gut microbiota composition.
- Researchers used the Yale Food Addiction Scale 2.0 criteria to classify extreme food addiction and nonaddiction in mouse models and humans.
- The gut microbiota of addicted and nonaddicted mice were compared to identify factors related to food addiction in the murine model. Researchers subsequently gave mice drinking water with the prebiotics lactulose or rhamnose and the bacterium Blautia wexlerae, which has been associated with a reduced risk for obesity and diabetes.
- Gut microbiota signatures were also analyzed in 15 individuals with food addiction and 13 matched controls.
TAKEAWAY:
- In both humans and mice, gut microbiome signatures suggested possible nonbeneficial effects of bacteria in the Proteobacteria phylum and potential protective effects of Actinobacteria against the development of food addiction.
- In correlational analyses, decreased relative abundance of the species B wexlerae was observed in addicted humans and of the Blautia genus in addicted mice.
- Administration of the nondigestible carbohydrates lactulose and rhamnose, known to favor Blautia growth, led to increased relative abundance of Blautia in mouse feces, as well as “dramatic improvements” in food addiction.
- In functional validation experiments, oral administration of B wexlerae in mice led to similar improvement.
IN PRACTICE:
“This novel understanding of the role of gut microbiota in the development of food addiction may open new approaches for developing biomarkers and innovative therapies for food addiction and related eating disorders,” the authors wrote.
SOURCE:
The study, led by Solveiga Samulėnaitė, a doctoral student at Vilnius University, Vilnius, Lithuania, was published online in Gut.
LIMITATIONS:
Further research is needed to elucidate the exact mechanisms underlying the potential use of gut microbiota for treating food addiction and to test the safety and efficacy in humans.
DISCLOSURES:
This work was supported by La Caixa Health and numerous grants from Spanish ministries and institutions and the European Union. No competing interests were declared.
A version of this article first appeared on Medscape.com.
Shortage of Blood Bottles Could Disrupt Care
Hospitals and laboratories across the United States are grappling with a shortage of Becton Dickinson BACTEC blood culture bottles that threatens to extend at least until September.
In a health advisory, the Centers for Disease Control and Prevention (CDC) warned that the critical shortage could lead to “delays in diagnosis, misdiagnosis, or other challenges” in the management of patients with infectious diseases.
Healthcare providers, laboratories, healthcare facility administrators, and state, tribal, local, and territorial health departments affected by the shortage “should immediately begin to assess their situations and develop plans and options to mitigate the potential impact,” according to the health advisory.
What to Do
To reduce the impact of the shortage, facilities are urged to:
- Determine the type of blood culture bottles they have
- Optimize the use of blood cultures at their facility
- Take steps to prevent blood culture contamination
- Ensure that the appropriate volume of blood is collected for culture
- Assess alternate options for blood cultures
- Work with a nearby facility or send samples to another laboratory
Health departments are advised to contact hospitals and laboratories in their jurisdictions to determine whether the shortage will affect them. Health departments are also encouraged to educate others on the supply shortage, optimal use of blood cultures, and mechanisms for reporting supply chain shortages or interruptions to the Food and Drug Administration (FDA), as well as to help with communication between laboratories and facilities willing to assist others in need.
To further assist affected providers, the CDC, in collaboration with the Infectious Diseases Society of America, hosted a webinar with speakers from Johns Hopkins University, Massachusetts General Hospital, and Vanderbilt University, who shared what their institutions are doing to cope with the shortage and protect patients.
Why It Happened
In June, Becton Dickinson warned its customers that they may experience “intermittent delays” in the supply of some BACTEC blood culture media over the coming months because of reduced availability of plastic bottles from its supplier.
In a July 22 update, the company said the supplier issues were “more complex” than originally communicated and that it was taking steps to “resolve this challenge as quickly as possible.”
In July, the FDA published a letter to healthcare providers acknowledging the supply disruptions and recommending strategies to preserve the supply for patients at highest risk.
Becton Dickinson has promised an update by September to this “dynamic and evolving situation.”
A version of this article appeared on Medscape.com.
Compounded Semaglutide Overdoses Tied to Hospitalizations
Patients are overdosing on compounded semaglutide because of errors in measuring and self-administering the drug and because clinicians miscalculate doses for compounded products, which may differ from US Food and Drug Administration (FDA)–approved products.
The FDA published an alert on July 26 after receiving reports of dosing errors involving compounded semaglutide injectable products dispensed in multidose vials. Adverse events included gastrointestinal effects, fainting, dehydration, headache, gallstones, and acute pancreatitis. Some patients required hospitalization.
Why the Risks?
FDA-approved semaglutide injectable products are dosed in milligrams, have standard concentrations, and are currently only available in prefilled pens.
Compounded semaglutide products may differ from approved products in ways that contribute to potential errors; for example, they may be dispensed in multidose vials and prefilled syringes. In addition, product concentrations may vary depending on the compounder, and even a single compounder may offer multiple concentrations of semaglutide.
Instructions for a compounded drug, if provided, may tell users to administer semaglutide injections in “units” rather than in milligrams, and the volume that a given number of units represents varies with the product's concentration. In some instances, patients received syringes significantly larger than the prescribed volume.
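The arithmetic behind these unit conversions is straightforward but unforgiving of a wrong concentration. The sketch below illustrates it with a hypothetical 2.5 mg/mL compounded concentration and a standard U-100 syringe (100 units per mL); both values are assumptions for illustration and do not describe any particular product.

```python
# Illustration of the mg -> units conversion underlying the reported dosing errors.
# The concentrations used here are hypothetical; compounded products vary, which is
# why the same number of "units" can deliver very different milligram doses.
UNITS_PER_ML = 100  # a standard U-100 insulin syringe marks 100 units per mL

def dose_in_units(dose_mg: float, concentration_mg_per_ml: float) -> float:
    """Convert a prescribed dose in milligrams to syringe units at a given concentration."""
    volume_ml = dose_mg / concentration_mg_per_ml
    return volume_ml * UNITS_PER_ML

# A 0.25 mg dose drawn from a hypothetical 2.5 mg/mL vial is 0.1 mL, or 10 units.
print(dose_in_units(0.25, concentration_mg_per_ml=2.5))  # 10.0
# At 5 mg/mL the same 0.25 mg dose is only 5 units, so injecting 10 units of that
# product would deliver double the intended dose.
print(dose_in_units(0.25, concentration_mg_per_ml=5.0))  # 5.0
```

Mixing up units, milliliters, and milligrams across products with different concentrations is exactly the kind of error described in the reports below.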
Common Errors
The FDA has received reports related to patients mistakenly taking more than the prescribed dose from a multidose vial — sometimes 5-20 times more than the intended dose.
Several reports described clinicians incorrectly calculating the intended dose when converting from milligrams to units or milliliters. In one case, a patient couldn’t get clarity on dosing instructions from the telemedicine provider who prescribed the compounded semaglutide, leading the patient to search online for medical advice. This resulted in the patient taking five times the intended dose.
In another example, one clinician prescribed 20 units instead of two units, affecting three patients who, after receiving 10 times the intended dose, experienced nausea and vomiting.
Another clinician, who also takes semaglutide himself, tried to recalculate his own dose in units and ended up self-administering a dose 10 times higher than intended.
The FDA previously warned about potential risks from the use of compounded drugs during a shortage as is the case with semaglutide. While compounded drugs can “sometimes” be helpful, according to the agency, “compounded drugs pose a higher risk to patients than FDA-approved drugs because compounded drugs do not undergo FDA premarket review for safety, effectiveness, or quality.”
Will Treating High Blood Pressure Curb Dementia Risk?
High blood pressure is an established risk factor for neurodegeneration and cognitive decline.
“There is no question in the literature that untreated high blood pressure may lead to dementia,” Valentin Fuster, MD, president of Mount Sinai Fuster Heart Hospital in New York City, told this news organization. “The open question is whether treating blood pressure is sufficient to decrease or stop the progress of dementia.”
Studies are mixed, but recent research suggests that addressing hypertension does affect the risk for dementia. A secondary analysis of the China Rural Hypertension Control Project, reported at the American Heart Association (AHA) Scientific Sessions in 2023 but not yet published, showed that the 4-year blood pressure–lowering program in adults aged 40 or older significantly reduced the risk for all-cause dementia and cognitive impairment.
Similarly, a post hoc analysis of the SPRINT MIND trial found that participants aged 50 or older who underwent intensive (< 120 mm Hg) vs standard (< 140 mm Hg) blood pressure lowering had a lower rate of probable dementia or mild cognitive impairment.
Other studies pointing to a benefit included a pooled individual participant analysis of five randomized controlled trials, which found class I evidence to support antihypertensive treatment to reduce the risk for incident dementia, and an earlier systematic review and meta-analysis of the association of blood pressure lowering with newly diagnosed dementia or cognitive impairment.
How It Might Work
Some possible mechanisms underlying the connection have emerged.
“Vascular disease caused by hypertension is clearly implicated in one form of dementia, called vascular cognitive impairment and dementia,” Andrew Moran, MD, PhD, associate professor of medicine at Columbia University Vagelos College of Physicians and Surgeons in New York City, told this news organization. “This category includes dementia following a stroke caused by uncontrolled hypertension.”
“At the same time, we now know that hypertension and other vascular risk factors can also contribute, along with other factors, to developing Alzheimer dementia,” he said. “Even without causing clinically evident stroke, vascular disease from hypertension can lead to subtle damage to the brain via ischemia, microhemorrhage, and atrophy.”
“It is well known that hypertension affects the vasculature, and the vasculature of the brain is not spared,” agreed Eileen Handberg, PhD, ARNP, a member of the Hypertension Workgroup at the American College of Cardiology (ACC) and a professor of medicine and director of the Cardiovascular Clinical Trials Program at the University of Florida in Gainesville, Florida. “Combine this with other mechanisms like inflammation and endothelial dysfunction, and add amyloid accumulation, and there is a deterioration in vascular beds leading to decreased cerebral blood flow,” she said.
Treating hypertension likely helps lower dementia risk through “a combination of reduced risk of stroke and also benefits on blood flow, blood vessel health, and reduction in neurodegeneration,” suggested Mitchell S.V. Elkind, MD, chief clinical science officer and past president of the AHA and a professor of neurology and epidemiology at Columbia University Irving Medical Center in New York City. “Midlife blood pressure elevations are associated with deposition of amyloid in the brain, so controlling blood pressure may reduce amyloid deposits and neurodegeneration.”
Time in Range or Treat to Target?
With respect to dementia risk, does treating hypertension to a specific target make a difference, or is it the time spent in a healthy blood pressure range?
“Observational studies and a post hoc analysis of the SPRINT MIND trial suggest that more time spent in a healthy blood pressure range or more stable blood pressure are associated with lower dementia risk,” Dr. Moran said. Citing results of the China Rural Hypertension Control Project and the SPRINT MIND trial, he suggested that while a dose-response effect (the lower the blood pressure, the lower the dementia risk) hasn’t been definitively demonstrated, it is likely the case.
In his practice, Dr. Moran follows ACC/AHA guidelines and prescribes antihypertensives to get blood pressure below 130/80 mm Hg in individuals with hypertension who have other high-risk factors (cardiovascular disease, diabetes, chronic kidney disease, or high risk for these conditions). “The treatment rule for people with hypertension without these other risk factors is less clear — lowering blood pressure below 140/90 mm Hg is a must; I will discuss with patients whether to go lower than that.”
“The relative contributions of time in range versus treating to a target for blood pressure require further study,” said Dr. Elkind. “It is likely that the cumulative effect of blood pressure over time has a big role to play — and it does seem clear that midlife blood pressure is even more important than blood pressure late in life.”
That said, he added, “In general and all things being equal, I would treat to a blood pressure of < 120/80 mm Hg,” given the SPRINT trial findings of greater benefits when treating to this systolic blood pressure goal. “Of course, if patients have side effects such as lightheadedness or dizziness or other medical conditions that require a higher target, then one would need to adjust the treatment targets.”
According to Dr. Fuster, targets should not be the focus because they vary. For example, the ACC/AHA guidelines use < 130/80 mm Hg, whereas the European Society of Hypertension guidelines and those of the American Academy of Family Physicians specify < 140/90 mm Hg and include age-based criteria. Because there are no studies comparing the outcomes of one set of guidelines vs another, Dr. Fuster thinks the focus should be on starting treatment as early as possible to prevent hypertension leading to dementia.
He pointed to the ongoing PESA study, which uses brain MRI and other tests to characterize longitudinal associations among cerebral glucose metabolism, subclinical atherosclerosis, and cardiovascular risk factors in asymptomatic individuals aged 40-54. Most did not have hypertension at baseline.
A recently published analysis of a subcohort of 370 PESA participants found that those with persistent high cardiovascular risk and subclinical carotid atherosclerosis already had signs of brain metabolic decline, “suggesting that maintenance of cardiovascular health during midlife could contribute to reductions in neurodegenerative disease burden later in life,” wrote the investigators.
Is It Ever Too Late?
If starting hypertension treatment in midlife can help reduce the risk for cognitive impairment later, can treating later in life also help? “It’s theoretically possible, but it has to be proven,” Dr. Fuster said. “There are no data on whether there’s less chance to prevent the development of dementia if you start treating hypertension at age 70, for example. And we have no idea whether hypertension treatment will prevent progression in those who already have dementia.”
“Treating high blood pressure in older adults could affect the course of further progressive cognitive decline by improving vascular health and preventing strokes, which likely exacerbate nonvascular dementia,” Dr. Elkind suggested. “Most people with dementia have a combination of vascular and nonvascular dementia, so treating reversible causes wherever possible makes a difference.”
Dr. Elkind treats older patients with this in mind, he said, “even though most of the evidence points to the fact that it is blood pressure in middle age, not older age, that seems to have the biggest impact on later-life cognitive decline and dementia.” Like Dr. Fuster, he said, “the best strategy is to identify and treat blood pressure in midlife, before damage to the brain has advanced.”
Dr. Moran noted, “The latest science on dementia causes suggests it is difficult to draw a border between vascular and nonvascular dementia. So, as a practical matter, healthcare providers should consider that hypertension treatment is one of the best ways to prevent any category of dementia. This dementia prevention is added to the well-known benefits of hypertension treatment to prevent heart attacks, strokes, and kidney disease: ‘Healthy heart, healthy brain.’ ”
“Our BP [blood pressure] control rates overall are still abysmal,” Dr. Handberg added. Currently, around one in four US adults with hypertension has it under control. Studies have shown that blood pressure control rates of 70%-80% are achievable, she said. “We can’t let patient or provider inertia continue.”
Dr. Handberg, Dr. Elkind, Dr. Moran, and Dr. Fuster declared no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Irregular Sleep Patterns Increase Type 2 Diabetes Risk
Irregular sleep duration was associated with a higher risk for diabetes in middle-aged to older adults in a new UK Biobank study.
The analysis of more than 84,000 participants with 7-day accelerometry data suggested that individuals with the most irregular sleep duration patterns had a 34% higher risk for diabetes compared with their peers who had more consistent sleep patterns.
“It’s recommended to have 7-9 hours of nightly sleep, but what is not considered much in policy guidelines or at the clinical level is how regularly that’s needed,” Sina Kianersi, PhD, of Brigham and Women’s Hospital in Boston, Massachusetts, said in an interview. “What our study added is that it’s not just the duration but keeping it consistent. Patients can reduce their risk of diabetes by maintaining their 7-9 hours of sleep, not just for 1 night but throughout life.”
The study was published online in Diabetes Care.
Modifiable Lifestyle Factor
Researchers analyzed data from 84,421 UK Biobank participants who were free of diabetes when they provided accelerometer data in 2013-2015 and who were followed for a median of 7.5 years (622,080 person-years).
Participants had an average age of 62 years, 57% were women, 97% were White individuals, and 50% were employed in non–shift work jobs.
Sleep duration variability was quantified by the within-person standard deviation (SD) of 7-night accelerometer-measured sleep duration.
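As a rough illustration of that metric, the sketch below computes a within-person SD from 7 hypothetical nightly durations; the study’s exact processing of the accelerometer data may differ, and the use of the sample SD here is an assumption.

```python
import statistics

# Minimal sketch of the variability metric: the within-person standard deviation
# of 7 nights of sleep duration. The nightly values (in hours) are hypothetical.
nightly_sleep_hours = [7.5, 6.0, 8.2, 5.5, 7.0, 9.0, 6.5]

sleep_sd_minutes = statistics.stdev(nightly_sleep_hours) * 60  # sample SD, in minutes
print(f"Sleep duration SD: {sleep_sd_minutes:.0f} minutes")    # ~74 minutes

# In the study's main comparison, an SD > 60 minutes defined the more irregular
# group (vs <= 60 minutes).
```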
Participants with higher sleep duration SD were younger and more likely to be women, shift workers, or current smokers; to report a definite “evening” chronotype (the body’s natural preference to sleep at a certain time); and to have lower socioeconomic status, higher body mass index, and shorter mean sleep duration. They were also less likely to be White.
In addition, a family history of diabetes and of depression was more prevalent among these participants.
A total of 2058 incident diabetes cases occurred during follow-up.
After adjustment for age, sex, and race, compared with a sleep duration SD ≤ 30 minutes, the hazard ratio (HR) was 1.15 for 31-45 minutes, 1.28 for 46-60 minutes, 1.54 for 61-90 minutes, and 1.59 for ≥ 91 minutes.
After the initial adjustment, individuals with a sleep duration SD of > 60 vs ≤ 60 minutes had a 34% higher diabetes risk. However, further adjustment for lifestyle, comorbidities, environmental factors, and adiposity attenuated the association — ie, the HR comparing sleep duration SD of > 60 vs ≤ 60 minutes was 1.11.
Furthermore, researchers found that the association between irregular sleep duration and diabetes was stronger among individuals with lower diabetes polygenic risk scores.
“One possible explanation for this finding is that the impact of sleep irregularity on diabetes risk may be less noticeable in individuals with a high genetic predisposition, where genetic factors dominate,” Dr. Kianersi said. “However, it is important to note that these sleep-gene interaction effects were not consistently observed across different measures and gene-related variables. This is something that remains to be further studied.”
Nevertheless, he added, “I want to emphasize that the association between irregular sleep duration and increased diabetes risk was evident across all levels of diabetes polygenic risk scores.”
The association also was stronger with longer sleep duration. The authors suggested that longer sleep duration “might reduce daylight exposure, which could, in turn, give rise to circadian disruption.”
Overall, Dr. Kianersi said, “Our study identified a modifiable lifestyle factor that can help lower the risk of developing type 2 diabetes.”
The study had several limitations. There was a median time lag of 5 years between the sleep duration measurements and the covariate assessments, which could introduce bias because lifestyle behaviors may change over time. In addition, a single 7-day sleep duration measurement may not capture long-term sleep patterns. A constrained random sampling approach was used to select participants, raising the potential for selection bias.
Regular Sleep Routine Best
Ana Krieger, MD, MPH, director of the Center for Sleep Medicine at Weill Cornell Medicine in New York City, commented on the study for this news organization. “This is a very interesting study, as it adds to the literature,” she said. “Previous research studies have shown metabolic abnormalities with variations in sleep time and duration.”
“This particular study evaluated a large sample of patients in the UK who were mostly White and middle-aged and may not be representative of the general population,” she noted. “A similar study in a Hispanic/Latino group failed to demonstrate any significant association between sleep timing variability and incidence of diabetes. It would be desirable to see if prospective studies are able to demonstrate a reduction in diabetes risk by implementing a more regular sleep routine.”
The importance of the body’s natural circadian rhythm in regulating and anchoring many physiological processes was highlighted by the 2017 Nobel Prize in Physiology or Medicine, which was awarded to three researchers for their discoveries in circadian biology, she pointed out.
“Alterations in the circadian rhythm are known to affect mood regulation, gastrointestinal function, and alertness, among other factors,” she said. “Keeping a regular sleep routine will help to improve our circadian rhythm and better regulate many processes, including our metabolism and appetite-controlling hormones.”
Notably, a study published online in Diabetologia in a racially and economically diverse US population also found that adults with persistent suboptimal sleep durations (< 7 or > 9 hours nightly over a mean of 5 years) were more likely to develop incident diabetes. The strongest association was found among participants reporting extreme changes and higher variability in their sleep durations.
This study was supported by the National Institutes of Health (grant number R01HL155395) and the UKB project 85501. Dr. Kianersi was supported by the American Heart Association Postdoctoral Fellowship. Dr. Kianersi and Dr. Krieger reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
Treatable Condition Misdiagnosed as Dementia in Almost 13% of Cases
Almost 13% of patients diagnosed with dementia may instead have undiagnosed cirrhosis with potential hepatic encephalopathy, a treatable cause of cognitive impairment, new research suggests.
The study of more than 68,000 individuals in the general population diagnosed with dementia between 2009 and 2019 found that almost 13% had FIB-4 scores indicative of cirrhosis and potential hepatic encephalopathy.
The findings, recently published online in The American Journal of Medicine, corroborate and extend the researchers’ previous work, which showed that about 10% of US veterans with a dementia diagnosis may in fact have hepatic encephalopathy.
“We need to increase awareness that cirrhosis and related brain complications are common, silent, but treatable when found,” said corresponding author Jasmohan Bajaj, MD, of Virginia Commonwealth University and Richmond VA Medical Center, Richmond, Virginia. “Moreover, these are being increasingly diagnosed in older individuals.”
“Cirrhosis can also predispose patients to liver cancer and other complications, so diagnosing it in all patients is important, regardless of the hepatic encephalopathy-dementia connection,” he said.
FIB-4 Is Key
Dr. Bajaj and colleagues analyzed data from 72 healthcare centers on 68,807 nonveteran patients diagnosed with dementia at two or more physician visits between 2009 and 2019. Patients had no prior cirrhosis diagnosis, the mean age was 73 years, 44.7% were men, and 78% were White.
The team measured the prevalence of two high FIB-4 scores (> 2.67 and > 3.25), selected for their strong predictive value for advanced cirrhosis. Researchers also examined associations between high scores and multiple comorbidities and demographic factors.
Alanine aminotransferase (ALT), aspartate aminotransferase (AST), and platelet labs were collected up to 2 years after the index dementia diagnosis because they are used to calculate FIB-4.
The mean FIB-4 score was 1.78, mean ALT was 23.72 U/L, mean AST was 27.42 U/L, and the mean platelet count was 243.51 × 10⁹/L.
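For readers unfamiliar with the index, the sketch below applies the standard FIB-4 formula, (age × AST) / (platelet count × √ALT), with age capped at 65 years as in the study’s modified version. Plugging in the cohort means is for illustration only; the reported mean FIB-4 of 1.78 is the average of individual patients’ scores, not the score of the average patient.

```python
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_10e9_per_l: float, age_cap: float = 65) -> float:
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT)), with age capped
    at 65 years as in the study's modified index."""
    age = min(age_years, age_cap)
    return (age * ast_u_l) / (platelets_10e9_per_l * math.sqrt(alt_u_l))

# Illustrative only: cohort means (age 73, AST 27.42, ALT 23.72, platelets 243.51)
score = fib4(73, 27.42, 23.72, 243.51)
print(round(score, 2))  # ~1.5, below the > 2.67 and > 3.25 thresholds used in the study
```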
A total of 8683 participants (12.8%) had a FIB-4 score greater than 2.67 and 5185 (7.6%) had a score greater than 3.25.
In multivariable logistic regression models, FIB-4 greater than 3.25 was associated with viral hepatitis (odds ratio [OR], 2.23), congestive heart failure (OR, 1.73), HIV (OR, 1.72), male gender (OR, 1.42), alcohol use disorder (OR, 1.39), and chronic kidney disease (OR, 1.38).
FIB-4 greater than 3.25 was inversely associated with White race (OR, 0.76) and diabetes (OR, 0.82).
The associations were similar when using a threshold score of greater than 2.67.
“With the aging population, including those with cirrhosis, the potential for overlap between hepatic encephalopathy and dementia has risen and should be considered in the differential diagnosis,” the authors wrote. “Undiagnosed cirrhosis and potential hepatic encephalopathy can be a treatable cause of or contributor towards cognitive impairment in patients diagnosed with dementia.”
Providers should use the FIB-4 index as a screening tool to detect cirrhosis in patients with dementia, they concluded.
The team’s next steps will include investigating barriers to the use of FIB-4 among practitioners, Dr. Bajaj said.
Incorporating use of the FIB-4 index into screening guidelines “with input from all stakeholders, including geriatricians, primary care providers, and neurologists … would greatly expand the diagnosis of cirrhosis and potentially hepatic encephalopathy in dementia patients,” Dr. Bajaj said.
The study had a few limitations, including the selected centers in the cohort database, lack of chart review to confirm diagnoses in individual cases, and the use of a modified FIB-4, with age capped at 65 years.
‘Easy to Miss’
Commenting on the research, Nancy Reau, MD, section chief of hepatology at Rush University Medical Center in Chicago, said that it is easy for physicians to miss asymptomatic liver disease that could progress and lead to cognitive decline.
“Most of my patients are already labeled with liver disease; however, it is not uncommon to receive a patient from another specialist who felt their presentation was more consistent with liver disease than the issue they were referred for,” she said.
Still, even in metabolic dysfunction–associated steatotic liver disease, which affects nearly one third of the population, the condition isn’t advanced enough in most patients to cause symptoms similar to those of dementia, said Dr. Reau, who was not associated with the study.
“It is more important for specialists in neurology to exclude liver disease and for hepatologists or gastroenterologists to be equipped with tools to exclude alternative explanations for neurocognitive presentations,” she said. “It is important to not label a patient as having HE and then miss alternative explanations.”
“Every presentation has a differential diagnosis. Using easy tools like FIB-4 can make sure you don’t miss liver disease as a contributing factor in a patient that presents with neurocognitive symptoms,” Dr. Reau said.
This work was partly supported by grants from the Department of Veterans Affairs merit review program and the National Institutes of Health’s National Center for Advancing Translational Sciences. Dr. Bajaj and Dr. Reau reported no conflicts of interest.
A version of this article appeared on Medscape.com.
From the American Journal of Medicine
High-Fiber Foods Release Appetite-Suppressing Gut Hormone
TOPLINE:
A high-fiber diet affects small intestine metabolism, spurring release of the appetite-suppressing gut hormone peptide tyrosine tyrosine (PYY) more than a low-fiber diet, and it does so regardless of the food’s structure, new research revealed.
METHODOLOGY:
- Researchers investigated how low- and high-fiber diets affect the release of the gut hormones PYY and glucagon-like peptide 1 (GLP-1).
- They randomly assigned 10 healthy volunteers to 4 days on one of three diets: high-fiber intact foods, such as peas and carrots; high-fiber foods with disrupted structures (the same high-fiber foods, but mashed or blended); or low-fiber processed foods. Volunteers then completed the remaining two diets in randomized order, with a washout period of at least 1 week between sessions, during which they reverted to their normal diet.
- The diets were energy- and macronutrient-matched, but only the two high-fiber diets were fiber-matched at 46.3-46.7 grams daily, whereas the low-fiber diet contained 12.6 grams of daily fiber.
- The researchers used nasoenteric tubes to sample chyme from the participants’ distal ileum lumina in a morning fasted state and every 60 minutes for 480 minutes postprandially on days 3 and 4, and they confirmed their findings using ileal organoids. Participants reported their postprandial hunger using a visual analog scale.
TAKEAWAY:
- Both high-fiber diets increased PYY release — but not GLP-1 release — compared with the low-fiber diet during the 0-240-minute postprandial period, when the food was mainly in the small intestine (a minimal sketch of this kind of time-window summary follows this list).
- At 120 minutes, both high-fiber diets increased PYY compared with the low-fiber diet, a finding that ran counter to the researchers’ hypothesis that intact food structures would stimulate PYY to a greater extent than disrupted food structures. Additionally, participants reported less hunger at 120 minutes with the high-fiber diets than with the low-fiber diet.
- High-fiber diets also increased ileal stachyose, and the disrupted high-fiber diet increased certain ileal amino acids.
- Treating the ileal organoids with ileal fluids or an amino acid and stachyose mixture stimulated PYY expression similarly to blood PYY expression, confirming the role of ileal metabolites in the release of PYY.
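Timed hormone measurements like these are often summarized as an incremental area under the curve relative to the fasting baseline over a window such as 0-240 minutes. The snippet below is a minimal sketch of that kind of summary on made-up PYY values sampled on the trial’s 60-minute grid; it does not reproduce the study’s data or statistical analysis.

```python
import numpy as np

# Hypothetical PYY concentrations (pmol/L) sampled every 60 minutes over the
# 0-240-minute postprandial window; values are illustrative, not trial data.
time_min = np.array([0, 60, 120, 180, 240])
pyy_high_fiber = np.array([12.0, 22.0, 30.0, 26.0, 20.0])
pyy_low_fiber = np.array([12.0, 16.0, 19.0, 17.0, 15.0])

def incremental_auc(t, y):
    """Trapezoidal area above the fasting (t = 0) baseline over the window."""
    return np.trapz(y - y[0], t)

print("iAUC, high fiber:", incremental_auc(time_min, pyy_high_fiber))
print("iAUC, low fiber:", incremental_auc(time_min, pyy_low_fiber))
```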
IN PRACTICE:
“High-fiber diets, regardless of their food structure, increased PYY release through alterations in the ileal metabolic profile,” the authors wrote. “Ileal molecules, which are shaped by dietary intake, were shown to play a role in PYY release, which could be used to design diets to promote satiety.”
SOURCE:
The study, led by Aygul Dagbasi, PhD, Imperial College London, England, was published online in Science Translational Medicine.
LIMITATIONS:
The study had several limitations, including the small number of participants, although the crossover design limited the influence of covariates on the study outcomes. Gastric emptying and gut transit rates differed widely; therefore, food that may have reached and affected the ileum before the first postprandial sampling point at 60 minutes was not captured. The authors had access to a limited number of organoids, which restricted the number of experiments they could perform. Although organoids are useful tools in vitro, they have limitations, the researchers noted.
DISCLOSURES:
The research was funded by the Biotechnology and Biological Sciences Research Council (BBSRC), Nestle Research, and Sosei Heptares. The Section for Nutrition at Imperial College London is funded by grants from the UK Medical Research Council, BBSRC, National Institute for Health and Care Research, and UKRI Innovate UK and is supported by the National Institute for Health and Care Research Imperial Biomedical Research Centre Funding Scheme. The study was funded by UKRI BBSRC to the principal investigator. The lipid analysis was funded by a British Nutrition Foundation Drummond Early Career Scientist Award. The food microscopy studies were supported by the BBSRC Food Innovation and Health Institute Strategic Programme. Three coauthors disclose that they are directors of Melico Sciences, and several coauthors have relationships with industry outside of the submitted work.
A version of this article first appeared on Medscape.com.
A Fitbit for the Gut May Aid in Detection of GI Disorders
A wearable system paired with an ingestible capsule can track the capsule’s location and measure gases such as oxygen and ammonia in the gut in real time, potentially aiding in the detection of gastrointestinal disorders, new research revealed.
Traditional methods for locating, measuring, and monitoring gasses associated with such disorders as irritable bowel syndrome, inflammatory bowel disease, food intolerances, and gastric cancers are often invasive and typically require hospital-based procedures.
This experimental system, developed by a team at the University of Southern California’s Viterbi School of Engineering, Los Angeles, represents “a significant step forward in ingestible technology,” according to principal investigator Yasser Khan, PhD, and colleagues.
The novel ingestible could someday serve as a “Fitbit for the gut” and aid in early disease detection, Dr. Khan said.
The team’s work was published online in Cell Reports Physical Science.
Real-Time Tracking
While wearables with sensors are a promising way to monitor body functions, the ability to track ingestible devices once they are inside the body has been limited.
To solve this problem, the researchers developed a system that includes a wearable coil (placed on a T-shirt for this study) and an ingestible pill with a 3D-printed shell made from a biocompatible resin.
The pill is equipped with a gas-permeable membrane, an optical gas-sensing membrane, an optical filter, and a printed circuit board that houses its electronic components. The gas sensor can detect oxygen in the 0%-20% range and ammonia in the 0-100 ppm concentration range.
The researchers developed various algorithms and conducted experiments to test the system’s ability to decode the pill’s location in a human gut model and in an ex vivo animal intestine. To simulate the in vivo environment, they tested the system in an agar phantom solution, which enabled them to track the pill’s movement.
So, how does it work?
Simply put, once the patient ingests the pill, a phone application connects to the pill over Bluetooth and sends a command to initiate the target gas and magnetic field measurements.
Next, the wearable coil generates a magnetic field, which is captured by a magnetic sensor on the pill, enabling the pill’s location to be decoded in real time.
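Localization of this kind is typically framed as an inverse problem: find the position whose predicted field best matches what the pill’s magnetic sensor reports. The sketch below illustrates the general idea with a simple magnetic-dipole approximation of the coil and a least-squares fit; it is an illustration of the principle only, not the authors’ algorithm, and all values are made up.

```python
import numpy as np
from scipy.optimize import least_squares

MU0_OVER_4PI = 1e-7  # T·m/A

def dipole_field(pos_m, moment):
    """Field of a point magnetic dipole (a rough stand-in for the wearable
    coil) at position pos_m, in tesla."""
    r = np.linalg.norm(pos_m)
    r_hat = pos_m / r
    return MU0_OVER_4PI * (3 * r_hat * np.dot(moment, r_hat) - moment) / r**3

# Hypothetical geometry: coil at the origin with its moment along z; the
# pill sits a few centimetres away inside the abdomen.
moment = np.array([0.0, 0.0, 2.0])            # A·m^2, illustrative
true_position = np.array([0.02, 0.05, 0.12])  # metres, illustrative
measured_field = dipole_field(true_position, moment)  # what the pill reports

# Decode the position by fitting the field model to the measurement.
fit = least_squares(lambda p: dipole_field(p, moment) - measured_field,
                    x0=np.array([0.0, 0.0, 0.10]))
print("estimated position (m):", np.round(fit.x, 3))
```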
Then, using optical absorption spectroscopy with a light-emitting diode, a photodiode, and the pill’s gas-sensing membrane, gasses such as oxygen and ammonia can be measured and mapped in 3D while the pill is in the gut.
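On the sensing side, optical absorption readings are commonly converted to concentrations with a Beer-Lambert-style calibration: compute absorbance from the photodiode signal relative to a gas-free reference, then map it to concentration with calibration constants. The snippet below sketches that principle with made-up calibration numbers; the pill’s actual sensing membranes and calibration are not described in this article.

```python
import math

def absorbance(photodiode_signal, reference_signal):
    """Absorbance relative to a gas-free reference reading (Beer-Lambert)."""
    return math.log10(reference_signal / photodiode_signal)

def ammonia_ppm(photodiode_signal, reference_signal, sensitivity=0.004):
    """Linear calibration from absorbance to ammonia concentration.
    The sensitivity (absorbance units per ppm) is a made-up constant."""
    conc = absorbance(photodiode_signal, reference_signal) / sensitivity
    return min(max(conc, 0.0), 100.0)  # clamp to the pill's 0-100 ppm range

# Illustrative reading: the LED signal drops from 1.00 to 0.83 across the
# gas-sensing membrane.
print(f"estimated ammonia: {ammonia_ppm(0.83, 1.00):.1f} ppm")
```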
Notably, elevated levels of ammonia, which is produced by Helicobacter pylori, could serve as a signal for peptic ulcers, gastric cancer, or irritable bowel syndrome, Dr. Khan said.
“The ingestible system with the wearable coil is both compact and practical, offering a clear path for application in human health,” he said. The work also could “empower patients to conveniently assess their GI gas profiles from home and manage their digestive health.”
The next step is to test the wearable in animal models to assess, among other factors, whether the gas-sensing system “will operate properly in biological tissue and whether clogging or coating with GI liquids and food particles causes sensor fouling and affects the measurement accuracy,” Dr. Khan and colleagues noted.
Dr. Khan acknowledges support from USC Viterbi School of Engineering. A provisional patent application has been filed based on the technology described in this work. During the preparation of this work, the authors used ChatGPT to check for grammatical errors in the writing. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
A version of this article first appeared on Medscape.com.
FROM CELL REPORTS PHYSICAL SCIENCE