Gut Microbiota Tied to Food Addiction Vulnerability
TOPLINE:
Gut microbiota composition is linked to vulnerability to food addiction in both mice and humans, and interventions that increased Blautia abundance improved food addiction in mice.
METHODOLOGY:
- Food addiction, characterized by a loss of control over food intake, may promote obesity and alter gut microbiota composition.
- Researchers used the Yale Food Addiction Scale 2.0 criteria to classify extreme food addiction and nonaddiction in mouse models and humans.
- Researchers compared the gut microbiota of addicted and nonaddicted mice to identify factors related to food addiction in the murine model. They subsequently gave mice drinking water containing the prebiotics lactulose or rhamnose, or orally administered the bacterium Blautia wexlerae, which has been associated with a reduced risk for obesity and diabetes.
- Gut microbiota signatures were also analyzed in 15 individuals with food addiction and 13 matched controls.
TAKEAWAY:
- In both humans and mice, gut microbiome signatures suggested possible nonbeneficial effects of bacteria in the Proteobacteria phylum and potential protective effects of Actinobacteria against the development of food addiction.
- In correlational analyses, decreased relative abundance of the species B wexlerae was observed in addicted humans and of the Blautia genus in addicted mice.
- Administration of the nondigestible carbohydrates lactulose and rhamnose, known to favor Blautia growth, led to increased relative abundance of Blautia in mouse feces, as well as “dramatic improvements” in food addiction.
- In functional validation experiments, oral administration of B wexlerae in mice led to similar improvement.
IN PRACTICE:
“This novel understanding of the role of gut microbiota in the development of food addiction may open new approaches for developing biomarkers and innovative therapies for food addiction and related eating disorders,” the authors wrote.
SOURCE:
The study, led by Solveiga Samulėnaitė, a doctoral student at Vilnius University, Vilnius, Lithuania, was published online in Gut.
LIMITATIONS:
Further research is needed to elucidate the exact mechanisms underlying the potential use of gut microbiota for treating food addiction and to test the safety and efficacy in humans.
DISCLOSURES:
This work was supported by La Caixa Health and numerous grants from Spanish ministries and institutions and the European Union. No competing interests were declared.
A version of this article first appeared on Medscape.com.
Shortage of Blood Bottles Could Disrupt Care
Hospitals and laboratories across the United States are grappling with a shortage of Becton Dickinson BACTEC blood culture bottles that threatens to extend at least until September.
In a health advisory, the Centers for Disease Control and Prevention (CDC) warned that the critical shortage could lead to “delays in diagnosis, misdiagnosis, or other challenges” in the management of patients with infectious diseases.
Healthcare providers, laboratories, healthcare facility administrators, and state, tribal, local, and territorial health departments affected by the shortage “should immediately begin to assess their situations and develop plans and options to mitigate the potential impact,” according to the health advisory.
What to Do
To reduce the impact of the shortage, facilities are urged to:
- Determine the type of blood culture bottles they have
- Optimize the use of blood cultures at their facility
- Take steps to prevent blood culture contamination
- Ensure that the appropriate volume of blood is collected for culture
- Assess alternate options for blood cultures
- Work with a nearby facility or send samples to another laboratory
Health departments are advised to contact hospitals and laboratories in their jurisdictions to determine whether the shortage will affect them. Health departments are also encouraged to educate others on the supply shortage, optimal use of blood cultures, and mechanisms for reporting supply chain shortages or interruptions to the Food and Drug Administration (FDA), as well as to help with communication between laboratories and facilities willing to assist others in need.
To further assist affected providers, the CDC, in collaboration with the Infectious Diseases Society of America, hosted a webinar with speakers from Johns Hopkins University, Massachusetts General Hospital, and Vanderbilt University, who shared what their institutions are doing to cope with the shortage and protect patients.
Why It Happened
In June, Becton Dickinson warned its customers that they may experience “intermittent delays” in the supply of some BACTEC blood culture media over the coming months because of reduced availability of plastic bottles from its supplier.
In a July 22 update, the company said the supplier issues were “more complex” than originally communicated and that it was taking steps to “resolve this challenge as quickly as possible.”
In July, the FDA published a letter to healthcare providers acknowledging the supply disruptions and recommending strategies to preserve the supply for patients at highest risk.
Becton Dickinson has promised an update by September to this “dynamic and evolving situation.”
A version of this article appeared on Medscape.com.
Compounded Semaglutide Overdoses Tied to Hospitalizations
Patients are overdosing on compounded semaglutide because of errors in measuring and self-administering the drug and because clinicians miscalculate doses for compounded products, which may differ from US Food and Drug Administration (FDA)–approved products.
The FDA published an alert on July 26 after receiving reports of dosing errors involving compounded semaglutide injectable products dispensed in multidose vials. Adverse events included gastrointestinal effects, fainting, dehydration, headache, gallstones, and acute pancreatitis. Some patients required hospitalization.
Why the Risks?
FDA-approved semaglutide injectable products are dosed in milligrams, have standard concentrations, and are currently only available in prefilled pens.
Compounded semaglutide products may differ from approved products in ways that contribute to potential errors — for example, in multidose vials and prefilled syringes. In addition, product concentrations may vary depending on the compounder, and even a single compounder may offer multiple concentrations of semaglutide.
Instructions for a compounded drug, if provided, may tell users to administer semaglutide injections in “units” rather than in milligrams; the dose delivered by a given number of units depends on the product’s concentration. In some instances, patients received syringes significantly larger than the prescribed volume.
Common Errors
The FDA has received reports of patients mistakenly taking more than the prescribed dose from a multidose vial — sometimes 5-20 times more than the intended dose.
Several reports described clinicians incorrectly calculating the intended dose when converting from milligrams to units or milliliters. In one case, a patient couldn’t get clarity on dosing instructions from the telemedicine provider who prescribed the compounded semaglutide, leading the patient to search online for medical advice. This resulted in the patient taking five times the intended dose.
In another example, one clinician prescribed 20 units instead of two units, affecting three patients who, after receiving 10 times the intended dose, experienced nausea and vomiting.
Another clinician, who also takes semaglutide himself, tried to recalculate his own dose in units and ended up self-administering a dose 10 times higher than intended.
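To make the arithmetic behind these errors concrete, the minimal sketch below shows how the same milligram dose lands on very different syringe markings depending on concentration. It is an illustration only, not dosing guidance: it assumes U-100 insulin syringes (1 unit = 0.01 mL), and the vial concentrations shown are hypothetical.

```python
# Illustrative sketch only, not dosing guidance.
# Assumption: doses are drawn with U-100 insulin syringes (1 unit = 0.01 mL);
# the vial concentrations below are hypothetical.

def mg_to_units(dose_mg: float, concentration_mg_per_ml: float) -> float:
    """Convert a semaglutide dose in milligrams to U-100 syringe units."""
    volume_ml = dose_mg / concentration_mg_per_ml
    return volume_ml * 100  # U-100 syringes read 100 units per mL

# The same 0.25 mg dose corresponds to very different unit markings:
for conc in (1.0, 2.5, 5.0):  # hypothetical concentrations, mg/mL
    print(f"{conc} mg/mL vial: 0.25 mg = {mg_to_units(0.25, conc):.0f} units")
# 1.0 mg/mL -> 25 units; 2.5 mg/mL -> 10 units; 5.0 mg/mL -> 5 units.
# A patient who draws 25 units from the 5 mg/mL vial, expecting a 1 mg/mL
# product, receives 1.25 mg: five times the intended 0.25 mg dose.
```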
The FDA previously warned about potential risks from the use of compounded drugs during a shortage, as is the case with semaglutide. While compounded drugs can “sometimes” be helpful, according to the agency, “compounded drugs pose a higher risk to patients than FDA-approved drugs because compounded drugs do not undergo FDA premarket review for safety, effectiveness, or quality.”
Will Treating High Blood Pressure Curb Dementia Risk?
High blood pressure is an established risk factor for neurodegeneration and cognitive decline.
“There is no question in the literature that untreated high blood pressure may lead to dementia,” Valentin Fuster, MD, president of Mount Sinai Fuster Heart Hospital in New York City, told this news organization. “The open question is whether treating blood pressure is sufficient to decrease or stop the progress of dementia.”
Studies are mixed, but recent research suggests that addressing hypertension does affect the risk for dementia. A secondary analysis of the China Rural Hypertension Control Project, reported at the American Heart Association (AHA) Scientific Sessions in 2023 but not yet published, showed that the 4-year blood pressure–lowering program in adults aged 40 years or older significantly reduced the risk for all-cause dementia and cognitive impairment.
Similarly, a post hoc analysis of the SPRINT MIND trial found that participants aged 50 or older who underwent intensive (< 120 mm Hg) vs standard (< 140 mm Hg) blood pressure lowering had a lower rate of probable dementia or mild cognitive impairment.
Other studies pointing to a benefit included a pooled individual participant analysis of five randomized controlled trials, which found class I evidence to support antihypertensive treatment to reduce the risk for incident dementia, and an earlier systematic review and meta-analysis of the association of blood pressure lowering with newly diagnosed dementia or cognitive impairment.
How It Might Work
Some possible mechanisms underlying the connection have emerged.
“Vascular disease caused by hypertension is clearly implicated in one form of dementia, called vascular cognitive impairment and dementia,” Andrew Moran, MD, PhD, associate professor of medicine at Columbia University Vagelos College of Physicians and Surgeons in New York City, told this news organization. “This category includes dementia following a stroke caused by uncontrolled hypertension.”
“At the same time, we now know that hypertension and other vascular risk factors can also contribute, along with other factors, to developing Alzheimer dementia,” he said. “Even without causing clinically evident stroke, vascular disease from hypertension can lead to subtle damage to the brain via ischemia, microhemorrhage, and atrophy.”
“It is well known that hypertension affects the vasculature, and the vasculature of the brain is not spared,” agreed Eileen Handberg, PhD, ARNP, a member of the Hypertension Workgroup at the American College of Cardiology (ACC) and a professor of medicine and director of the Cardiovascular Clinical Trials Program at the University of Florida, Gainesville, Florida. “Combine this with other mechanisms like inflammation and endothelial dysfunction, and add amyloid accumulation, and there is a deterioration in vascular beds leading to decreased cerebral blood flow,” she said.
Treating hypertension likely helps lower dementia risk through “a combination of reduced risk of stroke and also benefits on blood flow, blood vessel health, and reduction in neurodegeneration,” suggested Mitchell S.V. Elkind, MD, chief clinical science officer and past president of the AHA and a professor of neurology and epidemiology at Columbia University Irving Medical Center in New York City. “Midlife blood pressure elevations are associated with deposition of amyloid in the brain, so controlling blood pressure may reduce amyloid deposits and neurodegeneration.”
Time in Range or Treat to Target?
With respect to dementia risk, does treating hypertension to a specific target make a difference, or is it the time spent in a healthy blood pressure range?
“Observational studies and a post hoc analysis of the SPRINT MIND trial suggest that more time spent in a healthy blood pressure range or more stable blood pressure are associated with lower dementia risk,” Dr. Moran said. Citing results of the China Rural Hypertension Control Project and the SPRINT MIND trial, he suggested that while a dose-response effect (the lower the blood pressure, the lower the dementia risk) hasn’t been definitively demonstrated, it is likely the case.
In his practice, Dr. Moran follows ACC/AHA guidelines and prescribes antihypertensives to get blood pressure below 130/80 mm Hg in individuals with hypertension who have other high-risk factors (cardiovascular disease, diabetes, chronic kidney disease, or high risk for these conditions). “The treatment rule for people with hypertension without these other risk factors is less clear — lowering blood pressure below 140/90 mm Hg is a must; I will discuss with patients whether to go lower than that.”
“The relative contributions of time in range versus treating to a target for blood pressure require further study,” said Dr. Elkind. “It is likely that the cumulative effect of blood pressure over time has a big role to play — and it does seem clear that midlife blood pressure is even more important than blood pressure late in life.”
That said, he added, “In general and all things being equal, I would treat to a blood pressure of < 120/80 mm Hg,” given the SPRINT trial findings of greater benefits when treating to this systolic blood pressure goal. “Of course, if patients have side effects such as lightheadedness or dizziness or other medical conditions that require a higher target, then one would need to adjust the treatment targets.”
According to Dr. Fuster, targets should not be the focus because they vary. For example, the ACC/AHA guidelines use < 130/80 mm Hg, whereas the European Society of Hypertension guidelines and those of the American Academy of Family Physicians specify < 140/90 mm Hg and include age-based criteria. Because there are no studies comparing the outcomes of one set of guidelines vs another, Dr. Fuster thinks the focus should be on starting treatment as early as possible to prevent hypertension leading to dementia.
He pointed to the ongoing PESA trial, which uses brain MRI and other tests to characterize longitudinal associations among cerebral glucose metabolism, subclinical atherosclerosis, and cardiovascular risk factors in asymptomatic individuals aged 40-54. Most did not have hypertension at baseline.
A recently published analysis of a subcohort of 370 PESA participants found that those with persistent high cardiovascular risk and subclinical carotid atherosclerosis already had signs of brain metabolic decline, “suggesting that maintenance of cardiovascular health during midlife could contribute to reductions in neurodegenerative disease burden later in life,” wrote the investigators.
Is It Ever Too Late?
If starting hypertension treatment in midlife can help reduce the risk for cognitive impairment later, can treating later in life also help? “It’s theoretically possible, but it has to be proven,” Dr. Fuster said. “There are no data on whether there’s less chance to prevent the development of dementia if you start treating hypertension at age 70, for example. And we have no idea whether hypertension treatment will prevent progression in those who already have dementia.”
“Treating high blood pressure in older adults could affect the course of further progressive cognitive decline by improving vascular health and preventing strokes, which likely exacerbate nonvascular dementia,” Dr. Elkind suggested. “Most people with dementia have a combination of vascular and nonvascular dementia, so treating reversible causes wherever possible makes a difference.”
Dr. Elkind treats older patients with this in mind, he said, “even though most of the evidence points to the fact that it is blood pressure in middle age, not older age, that seems to have the biggest impact on later-life cognitive decline and dementia.” Like Dr. Fuster, he said, “the best strategy is to identify and treat blood pressure in midlife, before damage to the brain has advanced.”
Dr. Moran noted, “The latest science on dementia causes suggests it is difficult to draw a border between vascular and nonvascular dementia. So, as a practical matter, healthcare providers should consider that hypertension treatment is one of the best ways to prevent any category of dementia. This dementia prevention is added to the well-known benefits of hypertension treatment to prevent heart attacks, strokes, and kidney disease: ‘Healthy heart, healthy brain.’ ”
“Our BP [blood pressure] control rates overall are still abysmal,” Dr. Handberg added. Currently, around one in four US adults with hypertension has it under control. Studies have shown that blood pressure control rates of 70%-80% are achievable, she said. “We can’t let patient or provider inertia continue.”
Dr. Handberg, Dr. Elkind, Dr. Moran, and Dr. Fuster declared no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Irregular Sleep Patterns Increase Type 2 Diabetes Risk
Irregular sleep duration was associated with a higher risk for diabetes in middle-aged to older adults in a new UK Biobank study.
The analysis of more than 84,000 participants with 7-day accelerometry data suggested that individuals with the most irregular sleep duration patterns had a 34% higher risk for diabetes compared with their peers who had more consistent sleep patterns.
“It’s recommended to have 7-9 hours of nightly sleep, but what is not considered much in policy guidelines or at the clinical level is how regularly that’s needed,” Sina Kianersi, PhD, of Brigham and Women’s Hospital in Boston, Massachusetts, said in an interview. “What our study added is that it’s not just the duration but keeping it consistent. Patients can reduce their risk of diabetes by maintaining their 7-9 hours of sleep, not just for 1 night but throughout life.”
The study was published online in Diabetes Care.
Modifiable Lifestyle Factor
Researchers analyzed data from 84,421 UK Biobank participants who were free of diabetes when they provided accelerometer data in 2013-2015 and who were followed for a median of 7.5 years (622,080 person-years).
Participants had an average age of 62 years, 57% were women, 97% were White individuals, and 50% were employed in non–shift work jobs.
Sleep duration variability was quantified by the within-person standard deviation (SD) of 7-night accelerometer-measured sleep duration.
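As a concrete illustration of this exposure metric, the short sketch below computes a within-person SD from a week of nightly durations. The sample values and the use of the sample (n − 1) SD are assumptions for illustration, not details taken from the study.

```python
import statistics

# Hypothetical 7-night accelerometer-measured sleep durations, in hours.
regular_week = [7.2, 7.0, 7.4, 7.1, 7.3, 7.0, 7.2]
irregular_week = [5.0, 8.5, 6.0, 9.0, 5.5, 8.0, 6.5]

def sleep_duration_sd_minutes(nightly_hours: list[float]) -> float:
    """Within-person SD of nightly sleep duration, in minutes (sample SD)."""
    return statistics.stdev(nightly_hours) * 60

print(f"Regular sleeper:   SD = {sleep_duration_sd_minutes(regular_week):.0f} min")    # ~9 min
print(f"Irregular sleeper: SD = {sleep_duration_sd_minutes(irregular_week):.0f} min")  # ~94 min
# Against the study's categories, the first person falls in the
# lowest-variability group (SD <= 30 min) and the second in the
# highest (SD >= 91 min).
```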
Participants with higher sleep duration SD were younger and more likely to be women, shift workers, or current smokers; to report a definite “evening” chronotype (the body’s natural preference to sleep at a certain time); and to have lower socioeconomic status, higher body mass index, and shorter mean sleep duration. They were also less likely to be White.
In addition, a family history of diabetes and of depression was more prevalent among these participants.
A total of 2058 incident diabetes cases occurred during follow-up.
After adjustment for age, sex, and race, compared with a sleep duration SD ≤ 30 minutes, the hazard ratio (HR) was 1.15 for 31-45 minutes, 1.28 for 46-60 minutes, 1.54 for 61-90 minutes, and 1.59 for ≥ 91 minutes.
After the initial adjustment, individuals with a sleep duration SD of > 60 vs ≤ 60 minutes had a 34% higher diabetes risk. However, further adjustment for lifestyle, comorbidities, environmental factors, and adiposity attenuated the association — ie, the HR comparing sleep duration SD of > 60 vs ≤ 60 minutes was 1.11.
Furthermore, researchers found that the association between sleep duration variability and diabetes was stronger among individuals with a lower diabetes polygenic risk score.
“One possible explanation for this finding is that the impact of sleep irregularity on diabetes risk may be less noticeable in individuals with a high genetic predisposition, where genetic factors dominate,” Dr. Kianersi said. “However, it is important to note that these sleep-gene interaction effects were not consistently observed across different measures and gene-related variables. This is something that remains to be further studied.”
Nevertheless, he added, “I want to emphasize that the association between irregular sleep duration and increased diabetes risk was evident across all levels of diabetes polygenic risk scores.”
The association also was stronger with longer sleep duration. The authors suggested that longer sleep duration “might reduce daylight exposure, which could, in turn, give rise to circadian disruption.”
Overall, Dr. Kianersi said, “Our study identified a modifiable lifestyle factor that can help lower the risk of developing type 2 diabetes.”
The study had several limitations. There was a median time lag of 5 years between the sleep duration measurements and covariate assessments, during which lifestyle behaviors may have varied, potentially introducing bias. In addition, a single 7-day sleep duration measurement may not capture long-term sleep patterns. A constrained random sampling approach was used to select participants, raising the potential of selection bias.
Regular Sleep Routine Best
Ana Krieger, MD, MPH, director of the Center for Sleep Medicine at Weill Cornell Medicine in New York City, commented on the study for this news organization. “This is a very interesting study, as it adds to the literature,” she said. “Previous research studies have shown metabolic abnormalities with variations in sleep time and duration.”
“This particular study evaluated a large sample of patients in the UK, who were mostly White and middle-aged and may not be representative of the general population,” she noted. “A similar study in a Hispanic/Latino group failed to demonstrate any significant association between sleep timing variability and incidence of diabetes. It would be desirable to see if prospective studies are able to demonstrate a reduction in diabetes risk by implementing a more regular sleep routine.”
The importance of the body’s natural circadian rhythm in regulating and anchoring many physiological processes was highlighted by the 2017 Nobel Prize in Physiology or Medicine, which was awarded to three researchers in circadian biology, she pointed out.
“Alterations in the circadian rhythm are known to affect mood regulation, gastrointestinal function, and alertness, among other factors,” she said. “Keeping a regular sleep routine will help to improve our circadian rhythm and better regulate many processes, including our metabolism and appetite-controlling hormones.”
Notably, a study published online in Diabetologia in a racially and economically diverse US population also found that adults with persistent suboptimal sleep durations (< 7 or > 9 hours nightly over a mean of 5 years) were more likely to develop incident diabetes. The strongest association was found among participants reporting extreme changes and higher variability in their sleep durations.
This study was supported by the National Institutes of Health (grant number R01HL155395) and the UKB project 85501. Dr. Kianersi was supported by the American Heart Association Postdoctoral Fellowship. Dr. Kianersi and Dr. Krieger reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM DIABETES CARE
Treatable Condition Misdiagnosed as Dementia in Almost 13% of Cases
A study of more than 68,000 individuals in the general population diagnosed with dementia between 2009 and 2019 found that almost 13% had FIB-4 scores indicative of cirrhosis and potential hepatic encephalopathy.
The findings, recently published online in The American Journal of Medicine, corroborate and extend the researchers’ previous work, which showed that about 10% of US veterans with a dementia diagnosis may in fact have hepatic encephalopathy.
“We need to increase awareness that cirrhosis and related brain complications are common, silent, but treatable when found,” said corresponding author Jasmohan Bajaj, MD, of Virginia Commonwealth University and Richmond VA Medical Center, Richmond, Virginia. “Moreover, these are being increasingly diagnosed in older individuals.”
“Cirrhosis can also predispose patients to liver cancer and other complications, so diagnosing it in all patients is important, regardless of the hepatic encephalopathy-dementia connection,” he said.
FIB-4 Is Key
Dr. Bajaj and colleagues analyzed data from 72 healthcare centers on 68,807 nonveteran patients diagnosed with dementia at two or more physician visits between 2009 and 2019. Patients had no prior cirrhosis diagnosis; the mean age was 73 years, 44.7% were men, and 78% were White.
The team measured the prevalence of two high FIB-4 scores (> 2.67 and > 3.25), selected for their strong predictive value for advanced cirrhosis. Researchers also examined associations between high scores and multiple comorbidities and demographic factors.
Alanine aminotransferase (ALT), aspartate aminotransferase (AST), and platelet counts were collected up to 2 years after the index dementia diagnosis because these values are used to calculate the FIB-4 score.
The mean FIB-4 score was 1.78, mean ALT was 23.72 U/L, mean AST was 27.42 U/L, and mean platelet count was 243.51 × 10⁹/L.
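For readers unfamiliar with the index, FIB-4 combines age, AST, ALT, and platelet count as (age × AST) / (platelet count × √ALT). Below is a minimal sketch, using the cohort means above purely as illustrative inputs; the age cap of 65 years reflects the modified FIB-4 the study describes.

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l, age_cap=65):
    """FIB-4 = (age x AST) / (platelet count x sqrt(ALT)).

    age_cap=65 mirrors the modified FIB-4 described in the study;
    pass age_cap=None for the standard index.
    """
    if age_cap is not None:
        age_years = min(age_years, age_cap)
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

# Cohort means reported above, used only for illustration
print(round(fib4(73, 27.42, 23.72, 243.51), 2))                # ~1.50 (modified, age capped)
print(round(fib4(73, 27.42, 23.72, 243.51, age_cap=None), 2))  # ~1.69 (standard)
```

Both illustrative values fall below the 2.67 and 3.25 thresholds used to flag possible advanced fibrosis; note also that the mean of individual FIB-4 scores (1.78 here) need not equal a FIB-4 computed from mean inputs.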
A total of 8683 participants (12.8%) had a FIB-4 score greater than 2.67 and 5185 (7.6%) had a score greater than 3.25.
In multivariable logistic regression models, FIB-4 greater than 3.25 was associated with viral hepatitis (odds ratio [OR], 2.23), congestive heart failure (OR, 1.73), HIV (OR, 1.72), male gender (OR, 1.42), alcohol use disorder (OR, 1.39), and chronic kidney disease (OR, 1.38).
FIB-4 greater than 3.25 was inversely associated with White race (OR, 0.76) and diabetes (OR, 0.82).
The associations were similar when using a threshold score of greater than 2.67.
“With the aging population, including those with cirrhosis, the potential for overlap between hepatic encephalopathy and dementia has risen and should be considered in the differential diagnosis,” the authors wrote. “Undiagnosed cirrhosis and potential hepatic encephalopathy can be a treatable cause of or contributor towards cognitive impairment in patients diagnosed with dementia.”
Providers should use the FIB-4 index as a screening tool to detect cirrhosis in patients with dementia, they concluded.
The team’s next steps will include investigating barriers to the use of FIB-4 among practitioners, Dr. Bajaj said.
Incorporating use of the FIB-4 index into screening guidelines “with input from all stakeholders, including geriatricians, primary care providers, and neurologists … would greatly expand the diagnosis of cirrhosis and potentially hepatic encephalopathy in dementia patients,” Dr. Bajaj said.
The study had a few limitations, including the selected centers in the cohort database, lack of chart review to confirm diagnoses in individual cases, and the use of a modified FIB-4, with age capped at 65 years.
‘Easy to Miss’
Commenting on the research, Nancy Reau, MD, section chief of hepatology at Rush University Medical Center in Chicago, said that it is easy for physicians to miss asymptomatic liver disease that could progress and lead to cognitive decline.
“Most of my patients are already labeled with liver disease; however, it is not uncommon to receive a patient from another specialist who felt their presentation was more consistent with liver disease than the issue they were referred for,” she said.
Still, even in metabolic dysfunction–associated steatotic liver disease, which affects nearly one third of the population, the condition isn’t advanced enough in most patients to cause symptoms similar to those of dementia, said Dr. Reau, who was not associated with the study.
“It is more important for specialists in neurology to exclude liver disease and for hepatologists or gastroenterologists to be equipped with tools to exclude alternative explanations for neurocognitive presentations,” she said. “It is important to not label a patient as having HE and then miss alternative explanations.”
“Every presentation has a differential diagnosis. Using easy tools like FIB-4 can make sure you don’t miss liver disease as a contributing factor in a patient that presents with neurocognitive symptoms,” Dr. Reau said.
This work was partly supported by grants from the Department of Veterans Affairs merit review program and the National Institutes of Health’s National Center for Advancing Translational Sciences. Dr. Bajaj and Dr. Reau reported no conflicts of interest.
A version of this article appeared on Medscape.com.
FROM THE AMERICAN JOURNAL OF MEDICINE
High-Fiber Foods Release Appetite-Suppressing Gut Hormone
TOPLINE:
A high-fiber diet affects small intestine metabolism, spurring release of the appetite-suppressing gut hormone peptide tyrosine tyrosine (PYY) more than a low-fiber diet, and it does so regardless of the food’s structure, new research revealed.
METHODOLOGY:
- Researchers investigated how low- and high-fiber diets affect the release of the gut hormones PYY and glucagon-like peptide 1 (GLP-1).
- They randomly assigned 10 healthy volunteers to 4 days on one of three diets: high-fiber intact foods, such as peas and carrots; high-fiber foods with disrupted structures (the same high-fiber foods, but mashed or blended); or low-fiber processed foods. Volunteers then completed the remaining two diets in randomized order, with a washout period of at least a week, during which they reverted to their normal diet, between sessions (an illustrative assignment sketch follows this list).
- The diets were energy- and macronutrient-matched, but only the two high-fiber diets were fiber-matched at 46.3-46.7 grams daily, whereas the low-fiber diet contained 12.6 grams of daily fiber.
- The researchers used nasoenteric tubes to sample chyme from the participants’ distal ileum lumina in a morning fasted state and every 60 minutes for 480 minutes postprandially on days 3 and 4 and confirmed their findings using ileal organoids. Participants reported their postprandial hunger using a visual analog scale.
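To make the three-period crossover design concrete, here is a minimal, hypothetical assignment sketch; the function name and random assignment scheme are illustrative assumptions, as the trial’s actual randomization procedure is not described in this summary.

```python
import itertools
import random

DIETS = ["high-fiber intact", "high-fiber disrupted", "low-fiber processed"]

def assign_orders(participant_ids, seed=42):
    """Randomly assign each participant one of the 6 possible orderings of
    the three 4-day diet periods; washouts of at least 1 week on the normal
    diet separate consecutive periods."""
    rng = random.Random(seed)
    orders = list(itertools.permutations(DIETS))
    return {pid: rng.choice(orders) for pid in participant_ids}

for pid, order in assign_orders(range(1, 11)).items():
    print(f"volunteer {pid}: " + " -> ".join(order))
```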
TAKEAWAY:
- Both high-fiber diets increased PYY release — but not GLP-1 release — compared with a low-fiber diet during the 0-240-minute postprandial period, when the food was mainly in the small intestine.
- At 120 minutes, both high-fiber diets increased PYY compared with the low-fiber diet, a finding that contradicted the researchers’ hypothesis that intact food structures would stimulate PYY to a greater extent than disrupted food structures. Additionally, participants reported less hunger at 120 minutes with the high-fiber diets than with the low-fiber diet.
- High-fiber diets also increased ileal stachyose, and the disrupted high-fiber diet increased certain ileal amino acids.
- Treating the ileal organoids with ileal fluids or an amino acid and stachyose mixture stimulated PYY expression in a manner consistent with the blood PYY responses, confirming the role of ileal metabolites in the release of PYY.
IN PRACTICE:
“High-fiber diets, regardless of their food structure, increased PYY release through alterations in the ileal metabolic profile,” the authors wrote. “Ileal molecules, which are shaped by dietary intake, were shown to play a role in PYY release, which could be used to design diets to promote satiety.”
SOURCE:
The study, led by Aygul Dagbasi, PhD, Imperial College London, England, was published online in Science Translational Medicine.
LIMITATIONS:
The study had several limitations, including the small number of participants, although the crossover design limited the influence of covariates on the study outcomes. Gastric emptying and gut transit rates differed widely; therefore, food that may have reached and affected the ileum before the first postprandial sampling point at 60 minutes was not captured. The authors also had access to a limited number of organoids, which restricted the number of experiments they could perform, and although organoids are useful tools in vitro, they have limitations, the researchers noted.
DISCLOSURES:
The research was funded by the Biotechnology and Biological Sciences Research Council (BBSRC), Nestlé Research, and Sosei Heptares. The Section for Nutrition at Imperial College London is funded by grants from the UK Medical Research Council, BBSRC, National Institute for Health and Care Research, and UKRI Innovate UK and is supported by the National Institute for Health and Care Research Imperial Biomedical Research Centre Funding Scheme. The study was funded by UKRI BBSRC to the principal investigator. The lipid analysis was funded by a British Nutrition Foundation Drummond Early Career Scientist Award. The food microscopy studies were supported by the BBSRC Food Innovation and Health Institute Strategic Programme. Three coauthors disclosed that they are directors of Melico Sciences, and several coauthors have relationships with industry outside of the submitted work.
A version of this article first appeared on Medscape.com.
A Fitbit for the Gut May Aid in Detection of GI Disorders
An ingestible sensor system that can track its own location while measuring gasses in the gut in real time may aid in the detection of GI disorders, new research revealed.
Traditional methods for locating, measuring, and monitoring gasses associated with such disorders as irritable bowel syndrome, inflammatory bowel disease, food intolerances, and gastric cancers are often invasive and typically require hospital-based procedures.
This experimental system, developed by a team at the University of Southern California’s Viterbi School of Engineering, Los Angeles, represents “a significant step forward in ingestible technology,” according to principal investigator Yasser Khan, PhD, and colleagues.
The novel ingestible could someday serve as a “Fitbit for the gut” and aid in early disease detection, Dr. Khan said.
The team’s work was published online in Cell Reports Physical Science.
Real-Time Tracking
While wearables with sensors are a promising way to monitor body functions, the ability to track ingestible devices once they are inside the body has been limited.
To solve this problem, the researchers developed a system that includes a wearable coil (placed on a T-shirt for this study) and an ingestible pill with a 3D-printed shell made from a biocompatible resin.
The pill is equipped with a gas-permeable membrane, an optical gas-sensing membrane, an optical filter, and a printed circuit board that houses its electronic components. The gas sensor can detect oxygen in the 0%-20% range and ammonia in the 0-100 ppm concentration range.
The researchers developed various algorithms and conducted experiments to test the system’s ability to decode the pill’s location in a human gut model and in an ex vivo animal intestine. To simulate the in vivo environment, they tested the system in an agar phantom solution, which enabled them to track the pill’s movement.
So, how does it work?
Simply put, once the patient ingests the pill, a phone application connects to the pill over Bluetooth and sends a command to initiate the target gas and magnetic field measurements.
Next, the wearable coil generates a magnetic field, which is captured by a magnetic sensor on the pill, enabling the pill’s location to be decoded in real time.
Then, using optical absorption spectroscopy with a light-emitting diode, a photodiode, and the pill’s gas-sensing membrane, gasses such as oxygen and ammonia can be measured and mapped in 3D while the pill is in the gut.
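The gas readout itself rests on standard absorption arithmetic. The sketch below assumes simple Beer-Lambert behavior with made-up calibration constants (epsilon_per_ppm_cm and path_length_cm are hypothetical placeholders); the device’s actual calibration is not described in the article.

```python
import math

def absorbance(intensity_measured, intensity_reference):
    """Beer-Lambert absorbance: A = -log10(I / I0)."""
    return -math.log10(intensity_measured / intensity_reference)

def gas_concentration_ppm(intensity_measured, intensity_reference,
                          epsilon_per_ppm_cm=0.004, path_length_cm=0.5):
    """Concentration c = A / (epsilon * l); both constants are illustrative."""
    a = absorbance(intensity_measured, intensity_reference)
    return a / (epsilon_per_ppm_cm * path_length_cm)

# Example: the photodiode reads 85% of the reference LED intensity
print(round(gas_concentration_ppm(0.85, 1.0), 1))  # ~35.3 ppm
```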
Notably, elevated levels of ammonia, which is produced by Helicobacter pylori, could serve as a signal for peptic ulcers, gastric cancer, or irritable bowel syndrome, Dr. Khan said.
“The ingestible system with the wearable coil is both compact and practical, offering a clear path for application in human health,” he said. The work also could “empower patients to conveniently assess their GI gas profiles from home and manage their digestive health.”
The next step is to test the wearable in animal models to assess, among other factors, whether the gas-sensing system “will operate properly in biological tissue and whether clogging or coating with GI liquids and food particles causes sensor fouling and affects the measurement accuracy,” Dr. Khan and colleagues noted.
Dr. Khan acknowledges support from the USC Viterbi School of Engineering. A provisional patent application has been filed based on the technology described in this work. During the preparation of this work, the authors used ChatGPT to check for grammatical errors in the writing. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
A version of this article first appeared on Medscape.com.
FROM CELL REPORTS PHYSICAL SCIENCE
Factors Linked to Complete Response, Survival in Pancreatic Cancer
TOPLINE:
Nearly 5% of patients with localized pancreatic adenocarcinoma achieved a pathologic complete response after preoperative chemo(radio)therapy, and those who did had markedly better overall survival, a multicenter cohort study found. Several factors, including treatment type and tumor features, influenced the outcomes.
METHODOLOGY:
- Preoperative chemo(radio)therapy is increasingly used in patients with localized pancreatic adenocarcinoma and may improve the chance of a pathologic complete response. Achieving a pathologic complete response is associated with improved overall survival.
- However, the evidence on pathologic complete response is based on large national databases or small single-center series. Multicenter studies with in-depth data about complete response are lacking.
- In the current analysis, researchers investigated the incidence and factors associated with pathologic complete response after preoperative chemo(radio)therapy among 1758 patients (mean age, 64 years; 50% men) with localized pancreatic adenocarcinoma who underwent resection after two or more cycles of chemotherapy (with or without radiotherapy).
- Patients were treated at 19 centers in eight countries. The median follow-up was 19 months. Pathologic complete response was defined as the absence of vital tumor cells in the patient’s sampled pancreas specimen after resection.
- Factors associated with overall survival and pathologic complete response were investigated with Cox proportional hazards and logistic regression models, respectively.
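For background on the modeling approach, here is a hedged sketch of a logistic regression producing odds ratios for pathologic complete response; the data, variable names, and effect sizes are synthetic stand-ins invented for illustration, not the study’s actual data or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1758  # cohort size reported above

# Hypothetical binary predictors echoing factors reported in the study
df = pd.DataFrame({
    "head_tumor": rng.integers(0, 2, n),
    "radiologic_response": rng.integers(0, 2, n),
    "normal_ca19_9": rng.integers(0, 2, n),
})
# Assumed coefficients, chosen only so the example runs end to end
lin = -4.5 + 0.9*df["head_tumor"] + 2.0*df["radiologic_response"] + 1.3*df["normal_ca19_9"]
df["pcr"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(float)

X = sm.add_constant(df[["head_tumor", "radiologic_response", "normal_ca19_9"]])
result = sm.Logit(df["pcr"], X).fit(disp=False)
print(np.exp(result.params))  # exponentiated coefficients are odds ratios
```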
TAKEAWAY:
- Researchers found that the rate of pathologic complete response was 4.8% in patients who received chemo(radio)therapy before pancreatic cancer resection.
- Having a pathologic complete response was associated with a 54% lower risk for death (hazard ratio, 0.46). At 5 years, the overall survival rate was 63% in patients with a pathologic complete response vs 30% in patients without one.
- Preoperative modified FOLFIRINOX was more common among patients who achieved a pathologic complete response than among those who did not (58.8% vs 44.7%). Other factors associated with pathologic complete response included tumors located in the pancreatic head (odds ratio [OR], 2.51), tumors > 40 mm at diagnosis (OR, 2.58), partial or complete radiologic response (OR, 13.0), and normal(ized) serum carbohydrate antigen 19-9 after preoperative therapy (OR, 3.76).
- Preoperative radiotherapy (OR, 2.03) and preoperative stereotactic body radiotherapy (OR, 8.91) were also associated with a pathologic complete response; however, preoperative radiotherapy did not improve overall survival, and preoperative stereotactic body radiotherapy was independently associated with worse overall survival. These findings suggest that a pathologic complete response might not always reflect an optimal disease response.
IN PRACTICE:
Although pathologic complete response does not reflect cure, it is associated with better overall survival, the authors wrote. Factors associated with a pathologic complete response may inform treatment decisions.
SOURCE:
The study, with first author Thomas F. Stoop, MD, University of Amsterdam, the Netherlands, was published online on June 18 in JAMA Network Open.
LIMITATIONS:
The study had several limitations. The sample size and the limited number of events precluded comparative subanalyses, as well as a more detailed stratification for preoperative chemotherapy regimens. Information about patients’ race and the presence of BRCA germline mutations, both of which seem to be relevant to the chance of achieving a major pathologic response, was not collected or available.
DISCLOSURES:
No specific funding was noted. Several coauthors have industry relationships outside of the submitted work.
A version of this article first appeared on Medscape.com.
TOPLINE:
a multicenter cohort study found. Several factors, including treatment type and tumor features, influenced the outcomes.
METHODOLOGY:
- Preoperative chemo(radio)therapy is increasingly used in patients with localized pancreatic adenocarcinoma and may improve the chance of a pathologic complete response. Achieving a pathologic complete response is associated with improved overall survival.
- However, the evidence on pathologic complete response is based on large national databases or small single-center series. Multicenter studies with in-depth data about complete response are lacking.
- In the current analysis, researchers investigated the incidence and factors associated with pathologic complete response after preoperative chemo(radio)therapy among 1758 patients (mean age, 64 years; 50% men) with localized pancreatic adenocarcinoma who underwent resection after two or more cycles of chemotherapy (with or without radiotherapy).
- Patients were treated at 19 centers in eight countries. The median follow-up was 19 months. Pathologic complete response was defined as the absence of vital tumor cells in the patient’s sampled pancreas specimen after resection.
- Factors associated with overall survival and pathologic complete response were investigated with Cox proportional hazards and logistic regression models, respectively.
TAKEAWAY:
- Researchers found that the rate of pathologic complete response was 4.8% in patients who received chemo(radio)therapy before pancreatic cancer resection.
- Having a pathologic complete response was associated with a 54% lower risk for death (hazard ratio, 0.46). At 5 years, the overall survival rate was 63% in patients with a pathologic complete response vs 30% in patients without one.
- More patients who received preoperative modified FOLFIRINOX achieved a pathologic complete response (58.8% vs 44.7%). Other factors associated with pathologic complete response included tumors located in the pancreatic head (odds ratio [OR], 2.51), tumors > 40 mm at diagnosis (OR, 2.58), partial or complete radiologic response (OR, 13.0), and normal(ized) serum carbohydrate antigen 19-9 after preoperative therapy (OR, 3.76).
- Preoperative radiotherapy (OR, 2.03) and preoperative stereotactic body radiotherapy (OR, 8.91) were also associated with a pathologic complete response; however, preoperative radiotherapy did not improve overall survival, and preoperative stereotactic body radiotherapy was independently associated with worse overall survival. These findings suggest that a pathologic complete response might not always reflect an optimal disease response.
IN PRACTICE:
Although pathologic complete response does not reflect cure, it is associated with better overall survival, the authors wrote. Factors associated with a pathologic complete response may inform treatment decisions.
SOURCE:
The study, with first author Thomas F. Stoop, MD, University of Amsterdam, the Netherlands, was published online on June 18 in JAMA Network Open.
LIMITATIONS:
The study had several limitations. The sample size and the limited number of events precluded comparative subanalyses, as well as a more detailed stratification for preoperative chemotherapy regimens. Information about patients’ race and the presence of BRCA germline mutations, both of which seem to be relevant to the chance of achieving a major pathologic response, was not collected or available.
DISCLOSURES:
No specific funding was noted. Several coauthors have industry relationships outside of the submitted work.
A version of this article first appeared on Medscape.com.
Uproar Over Vitamin D Disease-Prevention Guideline
A recent report by this news organization on a vitamin D clinical practice guideline, released by the Endocrine Society in June, triggered an outpouring of objections from doctors and other readers in the comments section.
A society press release listed the key new recommendations on the use of vitamin D supplementation and screening to reduce disease risks in individuals without established indications for such treatment or testing:
- For healthy adults younger than 75, no supplementation at doses above the recommended dietary intakes.
- Populations that may benefit from higher doses include children and adolescents 18 and younger to prevent rickets and to reduce the risk for respiratory infection, individuals 75 and older to possibly lower mortality risk, “pregnant people” to potentially reduce various risks, and people with prediabetes to potentially reduce the risk of progression.
- No routine testing for 25-hydroxyvitamin D levels, including screening in people with dark complexion or obesity, because outcome-specific benefits based on those levels have not been identified.
- Based on insufficient evidence, the panel could not determine specific blood-level thresholds for 25-hydroxyvitamin D for adequacy or for target levels for disease prevention.
This news organization covered the guideline release and simultaneous presentation at the Endocrine Society annual meeting. In response to the coverage, more than 200 doctors and other readers expressed concerns about the guideline, and some said outright that they would not follow it (readers quoted below are identified by the usernames they registered with on the website).
One reader who posted as Dr. Joseph Destefano went so far as to call the guideline “dangerous” and “almost ... evil.” Ironically, some readers attacked this news organization itself, taking the coverage to be an endorsement of the guideline rather than a news report.
Ignores Potential Benefits
“They address issues dealing only with endocrinology and bone health for the most part,” Dr. Emilio Gonzalez wrote. “However, vitamin D insufficiency and deficiency are not rare, and they impact the treatment of autoimmune disorders, chronic pain control, immunosuppression, cancer prevention, cardiovascular health, etc. There is plenty of literature in this regard.”
“They make these claims as if quality studies contradicting their guidelines have not been out there for years,” Dr. Brian Batcheldor said. “What about the huge demographic with diseases that impact intestinal absorption, eg, Crohn’s and celiac disease, cystic fibrosis, and ulcerative colitis? What about the one in nine who now have autoimmune diseases still awaiting diagnosis? What about night workers or anyone with more restricted access to sun exposure? How about those whose cultural or religious dress codes limit skin exposure?”
The latter group was also mentioned in a post from Dr. Eve Finkelstein, who said, “They don’t take into account women who are totally covered for religious reasons. They have no skin other than part of their face exposed. It does not make sense not to supplement them. Ignoring women’s health needs seems to be the norm.”
“I don’t think they considered the oral health effects of vitamin D deficiency,” pointed out commenter Corie Lewis. “Excess dental calculus (tartar) from excess calcium/phosphate in saliva significantly increases an individual’s periodontal disease risks (gum disease), and low saliva calcium/phosphate increases dental caries (cavities) risks, which generally indicates an imbalance of the oral microbiome. Vitamin D can help create balance and reduce those oral health risks.”
Noted Kimberley Morris-Windisch, “Having worked in rheumatology and pain for most of my career, I have seen too many people benefit from correcting deficiency of vitamin D. To ignore this is to miss opportunities to improve patient health.” Furthermore, “I find it unlikely that it would only improve mortality after age 75. That makes no sense.”
“Also,” she added, “what is the number [needed] to harm? In my 25 years, I have seen vitamin D toxicity once and an excessively high level without symptoms one other time.”
“WHY? Just WHY?” lamented Anne Kinchen. “Low levels in pregnant women have long-term effects on the developing fetus — higher and earlier rates of osteopenia in female children, weaker immune systems overall. There are just SO many reasons to test. These guidelines for no testing are absurd!”
No Screening, No Need for Decision-Making?
Several readers questioned the society’s rationale for not screening, as expressed by session moderator Clifford J. Rosen, MD, director of Clinical and Translational Research and senior scientist at Maine Medical Center Research Institute, Scarborough, Maine.
“When clinicians measure vitamin D, then they’re forced to make a decision what to do about it,” Dr. Rosen said. “That’s where questions about the levels come in. And that’s a big problem. So what the panel’s saying is, don’t screen. ... This really gets to the heart of the issue, because we have no data that there’s anything about screening that allows us to improve quality of life. ... Screening is probably not worthwhile in any age group.”
Among the reader comments in this regard:
“So misguided. Don’t look because we don’t know what to do with data. That’s the message this article exposes. The recommendation is do nothing. But, doing nothing IS an action — not a default.” (Lisa Tracy)
“So now, you will not screen for vitamin D because you do not know what to do next? See a naturopathic doctor — we know what to do next!” (Dr. Joyce Roberson)
“Gee, how do we treat it? ... What to do? Sounds incompetent at minimum. I suspect it’s vital, easy, and inexpensive ... so hide it.” (Holly Kohley)
“Just because we do not know is not a rationale for not testing. The opposite should be done.” (Dr. JJ Gold)
Caters to Industry?
Many commenters intimated that pharmaceutical or insurance industry considerations played a role in the recommendations. Their comments included the following:
“I have been under the impression people do routine checkups to verify there are no hidden problems. If only some testing is done, the probability of not finding a problem is huge. ... Preventive healthcare should be looking for something to prevent instead of waiting until they can cure it. Of course, it might come back to ‘follow the money.’ It is much more profitable to diagnose and treat than it is to prevent.” (Grace Kyser)
“The current irrational ‘recommendation’ gives insurance companies an excuse to deny ALL tests of vitamin D — even if the proper code is supplied. The result is — people suffer. This recommendation does harm!” (Dr. JJ Gold)
“Essentially, they are saying let’s not screen ‘healthy’ individuals and ignore it altogether. Better to wait till they’re old, pregnant, or already sick and diagnosed with a disease. This is the problem with the healthcare in this country.” (Brittney Lesher)
“Until allopathic medicine stops waiting for severe symptoms to develop before even screening for potential health problems, the most expensive healthcare (aka, sick care) system in the world will continue to be content to focus on medical emergencies and ignore prevention. ...” (Dean Raffelock)
“Don’t test? Are you kidding me? Especially when people are supplementing? That is akin to taking a blood pressure medication without measuring blood pressures! ... Don’t test? Don’t supplement? ... I have only one explanation for such nonsense: Pharma lives off sick people, not healthy ones.” (Georg Schlomka)
On a conciliatory yet pointed note, Dr. Francesca Luna-Rudin commented, “I would like to remind all of my fellow physicians that recommendations should be regarded as just that, a ‘recommendation.’ As doctors, we can use guidelines and recommendations in our practice, but if a new one is presented that does not make sense or would lead to harm based on our education and training, then we are not bound to follow it!”
A version of this article first appeared on Medscape.com.