Early MS treatment tied to a major reduction in severe disability
Early treatment of multiple sclerosis (MS) is tied to a major reduction in the risk of severe disability, a new study suggests.
Patients who received early treatment had a 45% lower risk of reaching a disability score of 3 and a 60% lower risk of advancing to secondary progressive MS compared with those who began treatment more than 16 months after symptoms presented.
Those with a score of 3 can still walk unassisted but have moderate disability in one of eight areas, such as motor function, vision or thinking skills, or mild disability in three or four areas.
“With a very early treatment, within 6 months from the first symptoms and even before the MS diagnosis, we are now able to decrease long-term disability. This means the earlier the better – time is brain,” lead author Alvaro Cobo-Calvo, MD, PhD, clinical neurologist and researcher with the Multiple Sclerosis Center of Catalonia in Barcelona and the Universitat Autonoma de Barcelona, said in an interview.
The findings were published online in Neurology.
Measuring disability
The observational, retrospective study included people aged 50 years or younger who received MS treatment within 6 months of their first clinical demyelinating event (n = 194), 6-16 months later (n = 192), or more than 16 months after the initial symptoms presented (n = 194).
The investigators noted that this cohort is one of the few that is considered “deeply phenotyped,” meaning it is followed prospectively over time with strict quality controls and systematic data collection methods.
MRIs were done within 3-5 months of the first symptoms, again at 12 months after the first event, and every 5 years over a median 11.2-year follow-up.
Disability levels were measured using the Expanded Disability Status Scale, with scores ranging from 0-10 and higher scores indicating more disability.
Patients who received treatment within 6 months of first symptoms were 45% less likely to have a disability score of 3 by the end of the study than were those who received treatment more than 16 months after that first event (hazard ratio, 0.55; 95% confidence interval, 0.32-0.97).
The earliest-treatment group also had a 60% lower risk of advancing to secondary progressive MS than people in the latest-treatment group (HR, 0.40; 95% CI, 0.19-0.85).
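The percent reductions quoted above follow directly from the hazard ratios: percent risk reduction = (1 − HR) × 100. A minimal Python sketch of that arithmetic using the study's reported figures (illustrative only, not the authors' code):

```python
# Convert a hazard ratio (and its 95% CI bounds) into the percent
# risk reduction quoted in the article: reduction = (1 - HR) * 100.
def percent_reduction(hr: float) -> float:
    return (1.0 - hr) * 100.0

# Hazard ratios reported in the study (earliest vs. latest treatment).
outcomes = [
    ("EDSS score of 3", 0.55, 0.32, 0.97),
    ("Secondary progressive MS", 0.40, 0.19, 0.85),
]
for label, hr, lo, hi in outcomes:
    print(f"{label}: {percent_reduction(hr):.0f}% lower risk "
          f"(95% CI, {percent_reduction(hi):.0f}%-{percent_reduction(lo):.0f}%)")
```

Running this reproduces the 45% and 60% figures in the text.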
Better disease stability
The researchers also found that earlier treatment was associated with a 53% better chance of disease stability 1 year after initial treatment (HR, 0.47; 95% CI, 0.28-0.80).
The early-treatment group also had a lower rate of disability progression and lower self-reported severe disability, compared with those who were treated later.
The investigators also found that patients who received early treatment were at lower risk for disability, even those with a higher baseline radiologic burden.
Current guidelines recommend early treatment of MS, but it is unclear whether disease-modifying treatments (DMTs) should be prescribed after the first MS symptoms or after a definitive MS diagnosis.
Earlier studies often evaluated treatment efficacy after MS diagnosis. This study began tracking efficacy when therapy began after the first symptoms. In some cases, that was before a diagnosis was given.
“It is important to be cautious when starting treatment and we need to know if the patient will evolve to MS or if the patient is diagnosed with MS based on current McDonald criteria.
“In our study, 70% of patients had MS at the time of the first symptoms according to McDonald 2017 criteria, but the remainder started treatment without an ‘official’ diagnosis but with an event highly suggestive of MS,” Dr. Cobo-Calvo said.
He added that very early treatment after first symptoms is key to preserving neurologic functionality.
Controversy remains
Adding MRI results as a clinical variable is a novel approach, but the MRI risk score used in the study is a new tool that has not yet been validated, the authors of an accompanying editorial noted.
“The results of this study show that in order to achieve a balance between compared groups, matching on MRI has little to add to good-quality balancing on patients’ clinical and demographic features,” wrote Erin Longbrake, MD, PhD, of the department of neurology, Yale University, New Haven, Conn., and Tomas Kalincik, MD, PhD, of the Neuroimmunology Centre, department of neurology, Royal Melbourne Hospital and the CORe unit, department of medicine, University of Melbourne.
Despite growing evidence pointing to improved outcomes from administering DMTs soon after diagnosis, the timing and sequence of therapy remain an area of controversy, they added.
“While these uncertain diagnostic scenarios may tempt neurologists to ‘wait and see,’ the data presented here remind us that these patients remain at risk of accumulating disability,” the authors wrote. “Neurologists must therefore remain vigilant to ensure that diagnosis is made promptly, that patients are followed up effectively and that effective treatments are used liberally.”
The study was funded by the European Regional Development Fund, Instituto de Salud Carlos III. Dr. Cobo-Calvo has received a grant from Instituto de Salud Carlos III. Dr. Longbrake has consulted for Genentech and NGM Bio and received research support from Biogen and Genentech. Dr. Kalincik has received conference travel support and/or speaker honoraria from WebMD Global, Eisai, Novartis, Biogen, Roche, Sanofi Genzyme, Teva, BioCSL, and Merck, and has received research or educational event support from Biogen, Novartis, Genzyme, Roche, Celgene, and Merck.
A version of this article first appeared on Medscape.com.
FROM NEUROLOGY
Vegetarian diets can improve high-risk cardiovascular disease
Vegetarian diets can improve cardiometabolic risk factors in people with or at high risk of cardiovascular disease, a meta-analysis of randomized controlled trials shows.
“To the best of our knowledge, this meta-analysis is the first that generates evidence from randomized controlled trials to assess the association of vegetarian diets with outcomes in people affected by cardiovascular diseases,” report the authors. The study was published online in JAMA Network Open.
“The greatest improvements in hemoglobin A1c and low-density lipoprotein cholesterol (LDL-C) were observed in individuals with type 2 diabetes and people at high risk of cardiovascular disease, highlighting the potential protective and synergistic effects of vegetarian diets for the primary prevention of cardiovascular disease,” they say.
Poor diet is well established as a factor that increases the morbidity and mortality associated with cardiovascular disease; however, although data have linked vegetarian diets to cardiovascular disease prevention in the general population, research on the effectiveness of such diets in people at high risk of cardiovascular disease is lacking.
“To the best of our knowledge, no meta-analysis of randomized controlled trials has been conducted to investigate the association of vegetarian diets with outcomes among people with CVD – indeed, research here has primarily focused on observational studies,” write Tian Wang, RD, and colleagues at the University of Sydney.
Greater decreases in LDL-C, A1c, and body weight with vegetarian diets
For the meta-analysis, researchers identified 20 randomized controlled trials of vegetarian diets that enrolled a total of 1,878 adults with, or at high risk of, cardiovascular disease and reported measurements of LDL-C, A1c, or systolic blood pressure.
The studies were conducted in the United States, Asia, Europe, and New Zealand between 1990 and 2021. Sample sizes ranged from 12 to 291 participants.
Mean participant age ranged from 28 to 64 years across studies. The studies enrolled patients with cardiovascular disease (four studies), diabetes (seven studies), and people with at least two cardiovascular risk factors (nine studies).
The mean duration of the dietary intervention was 25.4 weeks (range, 2-24 months). The most commonly prescribed diets were vegan (plant-based foods only), lacto-ovo-vegetarian (excludes meat, poultry, and seafood but allows eggs and dairy products), and lacto-vegetarian (same as the previous but also excludes eggs).
Overall, those who consumed a vegetarian diet for an average of 6 months, versus comparison diets, had significantly greater decreases in LDL-C (6.6 mg/dL beyond the reduction achieved with standard therapy); A1c (0.24%); and body weight (3.4 kg), but the reduction in systolic blood pressure (0.1 mmHg) was not significantly greater.
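For readers curious how per-trial results combine into the pooled decreases reported above, here is a generic inverse-variance (fixed-effect) pooling sketch in Python. The trial values are hypothetical placeholders, not data from this meta-analysis, and the authors' actual methods may differ (e.g., a random-effects model):

```python
import numpy as np

# Hypothetical per-trial mean differences in LDL-C (mg/dL) versus the
# comparison diet, with their standard errors. Illustrative only.
md = np.array([-8.0, -5.5, -7.2, -4.1])  # mean differences
se = np.array([2.0, 1.5, 2.5, 1.8])      # standard errors

w = 1.0 / se**2                           # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)       # weighted mean difference
pooled_se = np.sqrt(1.0 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled MD: {pooled:.1f} mg/dL (95% CI, {lo:.1f} to {hi:.1f})")
```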
Assessment of the overall certainty of evidence using the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) tool showed a moderate level of evidence for reductions in LDL-C and A1c with the vegetarian diet.
Lacto-ovo vegetarian diets were associated with the greatest reduction in LDL-C (14.1 mg/dL); however, four out of the five trials restricted energy intake.
Of note, vegetarian diets were most effective for achieving glycemic control among people with type 2 diabetes and for improving weight among those at high risk of cardiovascular disease as well as those with type 2 diabetes.
The effects “suggest that vegetarian diets might have a synergistic [or at least nonantagonistic] use in potentiating the effects of optimal drug therapy in the prevention and treatment of a range of cardiometabolic diseases,” the authors write.
Although previous studies have shown similar improvements associated with a vegetarian diet, most studies did not stratify populations based on disease status, type of vegetarian diet, or comparison diet, the authors note.
The lack of improvement in systolic blood pressure is consistent with previous meta-analyses of vegetarian diets in general and suggests that salt intake may be the more important factor for that measure.
“[The meta-analysis] suggests that diet quality plays a major role in lowering blood pressure independent of animal food consumption, as the DASH [Dietary Approaches to Stop Hypertension] ... trial demonstrated,” the authors note.
Decreases in medication dose with vegetarian diet
Most patients were taking medications to manage hypertension, hyperglycemia, and/or dyslipidemia at trial enrollment, and in as many as eight of the studies, the vegetarian diet intervention resulted in a decrease in medication dose.
In fact, medication use could have obscured the favorable effects of the vegetarian diets, whose true effect size could be larger, the authors speculate.
“This hypothesis is supported by two randomized controlled trials in our meta-analysis that required patients not to take medication that could influence cardiometabolic outcomes, [and] these studies significantly improved systolic blood pressure and LDL-C,” they write.
Not all vegetarian diets are healthy
Although there are numerous variations in vegetarian diets, ranging from vegan diets that eliminate all animal foods to pesco-vegetarian diets that allow fish or seafood, most well-balanced versions can provide health benefits, including lower intakes of saturated fat and of L-carnitine and choline (precursors of the atherogenic trimethylamine N-oxide [TMAO]), which might explain the improvements seen in the meta-analysis.
The diets may also be high in dietary fiber, mono- and polyunsaturated fatty acids, potassium, magnesium, and phytochemicals, and have lower glycemic index scores.
Of note, 12 studies in the meta-analysis emphasized low-fat content, which the authors speculate may have contributed to the improvements observed in LDL-C.
Specifically, lacto-ovo vegetarian diets were associated with the greatest reduction in LDL-C (–14.1 mg/dL); however, four of the five trials restricted energy intake, which could also have played a role in the improvements.
Importantly, not all vegetarian diets are healthy, and the authors caution about some that allow, for instance, deep-fried foods rich in trans-fatty acids and salt, such as tempura vegetables, potentially increasing the risk of type 2 diabetes and coronary heart disease.
They note that “more than one-third of the studies included in our meta-analysis did not emphasize the importance of consuming minimally processed plant-based whole foods.”
Overall, however, the fact that the greatest improvements in A1c and LDL-C were seen in patients with type 2 diabetes and those at high risk of CVD “highlight[s] the potential protective and synergistic effects of vegetarian diets for the primary prevention of CVD.”
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
SGLT2 inhibitors linked with fewer gout flares in diabetes
TOPLINE:
Adults with both gout and type 2 diabetes who initiated an SGLT2 inhibitor had fewer recurrent gout flares compared with matched patients treated with a dipeptidyl peptidase–4 (DPP-4) inhibitor.
METHODOLOGY:
- The study used observational data collected from the entire population of British Columbia that included 15,067 adults with both gout and type 2 diabetes in 2014-2020.
- The group included 8,318 patients who initiated an SGLT2 inhibitor and 6,749 patients who initiated a DPP-4 inhibitor during the study period after at least 1 year of continuous enrollment.
- Using propensity-score matching, 4,075 matched pairs were identified, in which one person initiated an SGLT2 inhibitor and the other started a DPP-4 inhibitor (see the sketch after this list for how such matching works in general).
- The primary outcome was the count of recurrent gout flares during follow-up that required an ED visit, hospital admission, or an outpatient visit coupled with appropriate treatment, tallied from the first day of drug receipt until June 30, 2022, over an average follow-up of 1.6 years.
- Secondary endpoints included the incidence of myocardial infarction and stroke.
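Propensity-score matching, mentioned above, pairs each SGLT2 inhibitor initiator with a DPP-4 inhibitor initiator who looked similar at baseline. Below is a minimal Python sketch of the general technique (greedy 1:1 nearest-neighbor matching on the logit of the propensity score, with a caliper); the covariates, caliper, and implementation details are illustrative assumptions, not specifics from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_pairs(X: np.ndarray, treated: np.ndarray, caliper: float = 0.2):
    """Greedy 1:1 nearest-neighbor matching on the propensity-score logit.

    X: (n, p) array of baseline covariates (hypothetical).
    treated: boolean array, True for SGLT2 inhibitor initiators.
    caliper: maximum allowed distance, as a fraction of the logit SD.
    """
    # Propensity score: probability of initiating an SGLT2 inhibitor
    # given baseline covariates, from a logistic regression.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    logit = np.log(ps / (1.0 - ps))
    width = caliper * logit.std()
    controls = set(np.where(~treated)[0])
    pairs = []
    for i in np.where(treated)[0]:
        if not controls:
            break
        j = min(controls, key=lambda c: abs(logit[i] - logit[c]))
        if abs(logit[i] - logit[j]) <= width:  # accept only close matches
            pairs.append((i, j))
            controls.remove(j)                 # match without replacement
    return pairs
```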
TAKEAWAY:
- Total gout-flare rates were 52.4/1,000 person-years after SGLT2 inhibitor initiation versus 79.7/1,000 person-years after DPP-4 inhibitor initiation, for an adjusted rate ratio of 0.66, a significant reduction linked with SGLT2 inhibitor use (see the arithmetic check after this list).
- For flares that required an ED visit or hospitalization, initiation of an SGLT2 inhibitor was linked with a significant, reduced aRR of 0.52, compared with DPP-4 inhibitor initiation.
- The flare-rate reduction linked with SGLT2 inhibitor use was consistent regardless of sex, age, baseline diuretic use, prior treatment with a urate-lowering agent, and baseline gout intensity.
- SGLT2 inhibitor initiation was also significantly linked with a lower incidence of myocardial infarction (adjusted hazard ratio, 0.69) compared with DPP-4 inhibitor initiation, but stroke incidence did not differ significantly between the groups.
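As a rough sanity check (referenced in the flare-rate bullet above), the crude ratio of the two reported incidence rates is close to the adjusted rate ratio; the adjusted estimate of course comes from the study's regression model, not this back-of-the-envelope arithmetic:

```python
# Reported incidence rates, per 1,000 person-years.
rate_sglt2 = 52.4
rate_dpp4 = 79.7

crude_ratio = rate_sglt2 / rate_dpp4
print(f"Crude rate ratio: {crude_ratio:.2f}")  # ~0.66, close to the adjusted RR
```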
IN PRACTICE:
These findings suggest that SGLT2 inhibitors could have a much-needed ability to simultaneously reduce the burden of recurrent gout flares and coronary sequelae in patients with gout and type 2 diabetes, indicating that “SGLT2 inhibitors may offer distinct benefits,” making the drug class “a particularly attractive addition to current urate-lowering therapies,” the researchers write.
SOURCE:
The study was primarily conducted by researchers at Massachusetts General Hospital in Boston. The study was published online July 24 in Annals of Internal Medicine.
LIMITATIONS:
The data used in the study did not include gout flares that did not require medical attention, nor laboratory findings for study participants. Because the data were observational, the findings may be susceptible to unmeasured confounding.
DISCLOSURES:
The study received no commercial funding. One author has reported receiving consulting fees from ANI and LG Chem.
A version of this article first appeared on Medscape.com.
FROM ANNALS OF INTERNAL MEDICINE
Functional MRI shows that empathetic remarks reduce pain
These are the results of a study recently published in the Proceedings of the National Academy of Sciences that was conducted by a team led by neuroscientist Dan-Mikael Ellingsen, PhD, from Oslo University Hospital.
The researchers used functional MRI to scan the brains of 20 patients with chronic pain to investigate how a physician’s demeanor may affect patients’ sensitivity to pain, including effects in the central nervous system. During the scans, which were conducted in two sessions, the patients’ legs were exposed to stimuli that ranged from painless to moderately painful. The patients rated perceived pain intensity on a scale. The physicians also underwent fMRI.
Half of the patients were subjected to the pain stimuli while alone; the other half were subjected to pain while in the presence of a physician. The latter group of patients was divided into two subgroups. Half of the patients had spoken to the accompanying physician before the examination. They discussed the history of the patient’s condition to date, among other things. The other half underwent the brain scans without any prior interaction with a physician.
Worse when alone
Dr. Ellingsen and his colleagues found that patients who were alone during the examination reported greater pain than those who were in the presence of a physician, even though they were subjected to stimuli of the same intensity. In instances in which the physician and patient had already spoken before the brain scan, patients additionally felt that the physician was empathetic and understood their pain. Furthermore, the physicians were better able to estimate the pain that their patients experienced.
Evidence of trust
There was greater activity in the dorsolateral and ventrolateral prefrontal cortex, as well as in the primary and secondary somatosensory areas, in patients in the subgroup that had spoken to a physician. In the physicians, compared with the comparison group, activity in the dorsolateral prefrontal cortex corresponded more closely with activity in the patients’ secondary somatosensory areas, a brain region known to react to pain. This brain-to-brain correlation increased in line with the self-reported mutual trust between physician and patient.
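The “correspondence” between physician and patient brain activity described above is, in essence, a correlation between two activity time series, one from each brain. A toy Python illustration with synthetic signals (the study used actual fMRI time courses and more elaborate statistics than this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # number of time points in each (synthetic) activity series

# Synthetic signals: a patient somatosensory time course, and a physician
# prefrontal time course that partially tracks it plus independent noise.
patient_s2 = rng.standard_normal(n)
physician_dlpfc = 0.6 * patient_s2 + 0.8 * rng.standard_normal(n)

# Brain-to-brain concordance quantified as a Pearson correlation.
r = np.corrcoef(patient_s2, physician_dlpfc)[0, 1]
print(f"Patient-physician concordance (Pearson r): {r:.2f}")
```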
“These results prove that empathy and support can decrease pain intensity,” the investigators write. The data shed light on the brain processes behind the social modulation of pain during physician-patient interaction: the greater the therapeutic alliance, the greater the concordance between the two brains.
Beyond medication
Winfried Meissner, MD, head of the pain clinic at the department of anesthesiology and intensive care medicine at Jena University Hospital, Germany, and former president of the German Pain Society, said in an interview: “I view this as a vital study that impressively demonstrates that effective, intensive pain therapy is not just a case of administering the correct analgesic.”
“Instead, a focus should be placed on what common sense tells us, which is just how crucial an empathetic attitude from physicians and good communication with patients are when it comes to the success of any therapy,” Dr. Meissner added. Unfortunately, such an attitude and such communication often are not provided in clinical practice because of limitations on time.
“Now, with objectively collected data from patients and physicians, [Dr.] Ellingsen’s team has been able to demonstrate that human interaction has a decisive impact on the treatment of patients experiencing pain,” said Dr. Meissner. “The study should encourage practitioners to treat communication just as seriously as the pharmacology of analgesics.”
Perception and attitude
“The study shows remarkably well that empathetic conversation between the physician and patient represents a valuable therapeutic method and should be recognized as such,” emphasized Dr. Meissner. Of course, conversation cannot replace pharmacologic treatment, but it can supplement and reinforce it. Furthermore, a physician’s empathy presumably has an effect that is at least as great as a suitable analgesic.
“Pain is more than just sensory perception,” explained Dr. Meissner. “We all know that it has a strong affective component, and perception is greatly determined by context.” This can be seen, for example, in athletes, who often attribute less importance to their pain and can successfully perform competitively despite a painful injury.
Positive expectations
Dr. Meissner advised all physicians to treat patients with pain empathetically. He encourages them to ask patients about their pain, accompanying symptoms, possible fears, and other mental stress and to take these factors seriously.
Moreover, the findings accentuate the effect of prescribed analgesics. “Numerous studies have meanwhile shown that the more positive a patient’s expectations, the better the effect of a medication,” said Dr. Meissner. “We physicians must exploit this effect, too.”
This article was translated from the Medscape German Edition and a version appeared on Medscape.com.
recently published in the Proceedings of the National Academy of Sciences, that was conducted by a team led by neuroscientist Dan-Mikael Ellingsen, PhD, from Oslo University Hospital.
These are the results of a study,The researchers used functional MRI to scan the brains of 20 patients with chronic pain to investigate how a physician’s demeanor may affect patients’ sensitivity to pain, including effects in the central nervous system. During the scans, which were conducted in two sessions, the patients’ legs were exposed to stimuli that ranged from painless to moderately painful. The patients recorded perceived pain intensity using a scale. The physicians also underwent fMRI.
Half of the patients were subjected to the pain stimuli while alone; the other half were subjected to pain while in the presence of a physician. The latter group of patients was divided into two subgroups. Half of the patients had spoken to the accompanying physician before the examination. They discussed the history of the patient’s condition to date, among other things. The other half underwent the brain scans without any prior interaction with a physician.
Worse when alone
Dr. Ellingsen and his colleagues found that patients who were alone during the examination reported greater pain than those who were in the presence of a physician, even though they were subjected to stimuli of the same intensity. In instances in which the physician and patient had already spoken before the brain scan, patients additionally felt that the physician was empathetic and understood their pain. Furthermore, the physicians were better able to estimate the pain that their patients experienced.
The patients who had a physician by their side consistently experienced pain that was milder than the pain experienced by those who were alone. For pairs that had spoken beforehand, the patients considered their physician to be better able to understand their pain, and the physicians estimated the perceived pain intensity of their patients more accurately.
Evidence of trust
There was greater activity in the dorsolateral and ventrolateral prefrontal cortex, as well as in the primary and secondary somatosensory areas, in patients in the subgroup that had spoken to a physician. For the physicians, compared with the comparison group, there was an increase in correspondence between activity in the dorsolateral prefrontal cortex and activity in the secondary somatosensory areas of patients, which is a brain region that is known to react to pain. The brain activity correlation increased in line with the self-reported mutual trust between the physician and patient.
“These results prove that empathy and support can decrease pain intensity,” the investigators write. The data shed light on the brain processes behind the social modulation of pain during the interaction between the physician and the patient. Concordances in the brain are increased by greater therapeutic alliance.
Beyond medication
Winfried Meissner, MD, head of the pain clinic at the department of anesthesiology and intensive care medicine at Jena University Hospital, Germany, and former president of the German Pain Society, said in an interview: “I view this as a vital study that impressively demonstrates that effective, intensive pain therapy is not just a case of administering the correct analgesic.”
“Instead, a focus should be placed on what common sense tells us, which is just how crucial an empathetic attitude from physicians and good communication with patients are when it comes to the success of any therapy,” Dr. Meissner added. Unfortunately, such an attitude and such communication often are not provided in clinical practice because of limitations on time.
“Now, with objectively collected data from patients and physicians, [Dr.] Ellingsen’s team has been able to demonstrate that human interaction has a decisive impact on the treatment of patients experiencing pain,” said Dr. Meissner. “The study should encourage practitioners to treat communication just as seriously as the pharmacology of analgesics.”
Perception and attitude
“The study shows remarkably well that empathetic conversation between the physician and patient represents a valuable therapeutic method and should be recognized as such,” emphasized Dr. Meissner. Of course, conversation cannot replace pharmacologic treatment, but it can supplement and reinforce it. Furthermore, a physician’s empathy presumably has an effect that is at least as great as a suitable analgesic.
“Pain is more than just sensory perception,” explained Dr. Meissner. “We all know that it has a strong affective component, and perception is greatly determined by context.” This can be seen, for example, in athletes, who often attribute less importance to their pain and can successfully perform competitively despite a painful injury.
Positive expectations
Dr. Meissner advised all physicians to treat patients with pain empathetically. He encourages them to ask patients about their pain, accompanying symptoms, possible fears, and other mental stress and to take these factors seriously.
Moreover, the findings accentuate the effect of prescribed analgesics. “Numerous studies have meanwhile shown that the more positive a patient’s expectations, the better the effect of a medication,” said Dr. Meissner. “We physicians must exploit this effect, too.”
This article was translated from the Medscape German Edition and a version appeared on Medscape.com.
These are the results of a study, recently published in the Proceedings of the National Academy of Sciences, that was conducted by a team led by neuroscientist Dan-Mikael Ellingsen, PhD, from Oslo University Hospital.
The researchers used functional MRI (fMRI) to scan the brains of 20 patients with chronic pain to investigate how a physician’s demeanor may affect patients’ sensitivity to pain, including effects in the central nervous system. During the scans, which were conducted in two sessions, the patients’ legs were exposed to stimuli that ranged from painless to moderately painful, and the patients rated the perceived pain intensity on a scale. The physicians also underwent fMRI.
Half of the patients were subjected to the pain stimuli while alone; the other half, while in the presence of a physician. The latter group was divided into two subgroups: half had spoken with the accompanying physician before the examination, discussing the history of the patient’s condition to date, among other things; the other half underwent the brain scans without any prior interaction with a physician.
Worse when alone
Dr. Ellingsen and his colleagues found that patients who were alone during the examination consistently reported greater pain than those who had a physician by their side, even though they were subjected to stimuli of the same intensity. For pairs that had spoken beforehand, the patients additionally felt that the physician was empathetic and understood their pain, and the physicians estimated the perceived pain intensity of their patients more accurately.
Evidence of trust
Patients in the subgroup that had spoken to a physician showed greater activity in the dorsolateral and ventrolateral prefrontal cortex, as well as in the primary and secondary somatosensory areas. In these pairs, compared with the comparison group, activity in the physician’s dorsolateral prefrontal cortex corresponded more closely with activity in the patient’s secondary somatosensory areas, a region known to react to pain. This brain-to-brain correlation increased in line with the self-reported mutual trust between physician and patient.
“These results prove that empathy and support can decrease pain intensity,” the investigators write. The data shed light on the brain processes behind the social modulation of pain during the physician-patient interaction: the greater the therapeutic alliance, the greater the concordance between the two brains.
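For readers curious how such brain-to-brain coupling is typically quantified, the sketch below is a minimal, hypothetical illustration in Python; the pair count, time-series lengths, and the 7-point trust scale are assumptions for the example, not details taken from the study.

```python
# Hypothetical sketch of physician-patient brain-to-brain coupling:
# correlate the physician's dorsolateral prefrontal (DLPFC) time series
# with the patient's secondary somatosensory (S2) time series per pair,
# then relate coupling strength to self-reported mutual trust.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_pairs, n_timepoints = 20, 300  # illustrative sizes, not the study's

# Stand-in data: one ROI time series per person per pair, plus a trust rating.
physician_dlpfc = rng.standard_normal((n_pairs, n_timepoints))
patient_s2 = rng.standard_normal((n_pairs, n_timepoints))
trust = rng.uniform(1, 7, n_pairs)  # assumed 7-point mutual-trust scale

# Coupling = Pearson correlation between the two time series of each pair.
coupling = np.array([
    np.corrcoef(physician_dlpfc[i], patient_s2[i])[0, 1]
    for i in range(n_pairs)
])

# The pattern reported above would appear as a positive association
# between coupling and trust across pairs.
rho, p_value = spearmanr(coupling, trust)
print(f"rho={rho:.2f}, p={p_value:.3f}")
```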
Beyond medication
Winfried Meissner, MD, head of the pain clinic at the department of anesthesiology and intensive care medicine at Jena University Hospital, Germany, and former president of the German Pain Society, said in an interview: “I view this as a vital study that impressively demonstrates that effective, intensive pain therapy is not just a case of administering the correct analgesic.”
“Instead, a focus should be placed on what common sense tells us, which is just how crucial an empathetic attitude from physicians and good communication with patients are when it comes to the success of any therapy,” Dr. Meissner added. Unfortunately, such an attitude and such communication often are not provided in clinical practice because of limitations on time.
“Now, with objectively collected data from patients and physicians, [Dr.] Ellingsen’s team has been able to demonstrate that human interaction has a decisive impact on the treatment of patients experiencing pain,” said Dr. Meissner. “The study should encourage practitioners to treat communication just as seriously as the pharmacology of analgesics.”
Perception and attitude
“The study shows remarkably well that empathetic conversation between the physician and patient represents a valuable therapeutic method and should be recognized as such,” emphasized Dr. Meissner. Of course, conversation cannot replace pharmacologic treatment, but it can supplement and reinforce it. Furthermore, a physician’s empathy presumably has an effect at least as great as that of a suitable analgesic.
“Pain is more than just sensory perception,” explained Dr. Meissner. “We all know that it has a strong affective component, and perception is greatly determined by context.” This can be seen, for example, in athletes, who often attribute less importance to their pain and can successfully perform competitively despite a painful injury.
Positive expectations
Dr. Meissner advised all physicians to treat patients with pain empathetically. He encourages them to ask patients about their pain, accompanying symptoms, possible fears, and other mental stress and to take these factors seriously.
Moreover, the findings suggest that positive expectations can amplify the effect of prescribed analgesics. “Numerous studies have meanwhile shown that the more positive a patient’s expectations, the better the effect of a medication,” said Dr. Meissner. “We physicians must exploit this effect, too.”
This article was translated from the Medscape German Edition and a version appeared on Medscape.com.
FROM THE PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES
‘Brain fitness program’ may aid memory loss, concussion, ADHD
A “brain fitness program” may improve symptoms in patients with memory loss, postconcussion syndrome (PCS), or attention-deficit/hyperactivity disorder (ADHD), new research shows.
The program, which consists of targeted cognitive training and EEG-based neurofeedback, coupled with meditation and diet/lifestyle coaching, led to improvements in memory, attention, mood, alertness, and sleep.
The program promotes neuroplasticity and was “equally effective for patients with all three conditions,” program creator Majid Fotuhi, MD, PhD, said in an interview.
Patients with mild to moderate cognitive symptoms often see “remarkable” results within 3 months of consistently following the program, said Dr. Fotuhi, adjunct professor of neuroscience at George Washington University, Washington, and medical director of NeuroGrow Brain Fitness Center, McLean, Va.
“It actually makes intuitive sense that a healthier and stronger brain would function better and that patients of all ages with various cognitive or emotional symptoms would all benefit from improving the biology of their brain,” Dr. Fotuhi added.
The study was published online in the Journal of Alzheimer’s Disease Reports.
Personalized program
The findings are based on 223 children and adults who completed the 12-week NeuroGrow Brain Fitness Program (NeuroGrow BFP), including 71 with ADHD, 88 with PCS, and 64 with memory loss, defined as diagnosed mild cognitive impairment or subjective cognitive decline.
As part of the program, participants undergo a complete neurocognitive evaluation, including tests for verbal memory, complex attention, processing speed, executive functioning, and the Neurocognitive Index.
They also complete questionnaires regarding sleep, mood, diet, exercise, and anxiety/depression, and they undergo quantitative EEG at the beginning and end of the program.
A comparison of before and after neurocognitive test scores showed that all three patient subgroups experienced statistically significant improvements on most measures, the study team reports.
After completing the program, 60%-90% of patients scored higher on cognitive tests and reported having fewer cognitive, sleep, and emotional symptoms.
In all subgroups, the most significant improvement was observed in executive functioning.
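As a rough illustration of this kind of before-and-after comparison, the sketch below runs a paired test on hypothetical pre- and post-program scores; the sample size, score distributions, and effect size are invented for the example and do not come from the paper.

```python
# Hypothetical paired pre/post comparison of neurocognitive scores.
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

rng = np.random.default_rng(2)
pre = rng.normal(95, 10, 64)        # invented baseline scores
post = pre + rng.normal(5, 8, 64)   # invented post-program change

# A paired t-test asks whether the mean within-person change differs from
# zero; the Wilcoxon signed-rank test is a nonparametric alternative.
print(ttest_rel(post, pre))
print(wilcoxon(post - pre))
```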
“These preliminary findings appear to show that multimodal interventions which are known to increase neuroplasticity in the brain, when personalized, can have benefits for patients with cognitive symptoms from a variety of neurological conditions,” the investigators wrote.
The study’s strengths include a large, community-based sample of patients of different ages whose disruptive symptoms and abnormalities were documented with objective cognitive tests and whose progress was monitored with both objective and subjective measures.
The chief limitation is the lack of a control or placebo group.
“Though it is difficult to find a comparable group of patients with the exact same profile of cognitive deficits and brain-related symptoms, studying a larger group of patients – and comparing them with a wait-list group – may make it possible to do a more definitive assessment of the NeuroGrow BFP,” the researchers noted.
Dr. Fotuhi said the “secret to the success” of the program is that it involves a full assessment of all cognitive and neurobehavioral symptoms for each patient. This allows for individualized and targeted interventions for specific concerns and symptoms.
He said there is a need to recognize that patients who present to a neurology practice with a single complaint, such as a problem with memory or attention, often have other problems, such as anxiety/depression, stress, insomnia, sedentary lifestyle, obesity, diabetes, sleep apnea, or alcohol overuse.
“Each of these factors can affect their cognitive abilities and need a multimodal set of interventions in order to see full resolution of their cognitive symptoms,” Dr. Fotuhi said.
He has created a series of educational videos to demonstrate the program’s benefits.
The self-pay cost for the NeuroGrow BFP assessment and treatment sessions is approximately $7,000.
Dr. Fotuhi said all of the interventions included in the program are readily available at low cost.
He suggested that health care professionals who lack time or staff for conducting a comprehensive neurocognitive assessment for their patients can provide them with a copy of the Brain Health Index.
“Patients can then be instructed to work on the individual components of their brain health on their own – and measure their brain health index on a weekly basis,” Dr. Fotuhi said. “Private practices or academic centers can use the detailed information I have provided in my paper to develop their own brain fitness program.”
Not ready for prime time
Commenting on the study, Percy Griffin, PhD, director of scientific engagement for the Alzheimer’s Association, noted that “nonpharmacologic interventions can help alleviate some of the symptoms associated with dementia.
“The current study investigates nonpharmacologic interventions in a small number of patients with ADHD, postconcussion syndrome, or memory loss. The researchers found improvements on most measures following the brain rehabilitation program.
“While this is interesting, more work is needed in larger, more diverse cohorts before these programs can be applied broadly. Nonpharmacologic interventions are a helpful tool that need to be studied further in future studies,” Dr. Griffin added.
Funding for the study was provided by the NeuroGrow Brain Fitness Center. Dr. Fotuhi, the owner of NeuroGrow, was involved in data analysis, writing, editing, approval, and decision to publish. Dr. Griffin reported no disclosures.
A version of this article appeared on Medscape.com.
FROM THE JOURNAL OF ALZHEIMER’S DISEASE REPORTS
Link between low co-pays for new diabetes drugs and patient adherence
Findings from a recent study indicate that the less U.S. patients pay out of pocket for drugs that often carry high co-pays, such as sodium-glucose cotransporter 2 (SGLT2) inhibitors and glucagonlike peptide-1 (GLP-1) agonists, the better they adhere to these medications.
The study, led by Utibe R. Essien, MD, from University of California, Los Angeles, and Balvindar Singh, MD, PhD, from University of Pittsburgh, was published online in JAMA Cardiology.
The researchers analyzed patient data from Clinformatics Data Mart, a health insurance claims database, covering 90,041 U.S. adults with commercial or Medicare health insurance who started a GLP-1 agonist or SGLT2 inhibitor between 2014 and 2020. Participants had type 2 diabetes, heart failure, or both.
For the primary outcome, patients with a lower drug co-pay had significantly higher odds of 12-month adherence to GLP-1 agonists and SGLT2 inhibitors than those with a higher co-pay, and these differences persisted after controlling for patient demographic, clinical, and socioeconomic covariates.
In fully adjusted models, patients with a high co-pay ($50 or more per month) were 53% less likely to adhere to an SGLT2 inhibitor and 32% less likely to adhere to a GLP-1 agonist over the 12 months, compared with patients whose co-pay for these agents was less than $10 per month.
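To make the adjusted-odds framing concrete, here is a minimal sketch of how such an analysis is commonly set up; the file, column names, and covariate list are assumptions for illustration and are not taken from the published model.

```python
# Hypothetical setup: adjusted odds of 12-month adherence by co-pay tier.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: adherent_12mo (0/1), copay_tier ('<$10', '$10-49',
# '>=$50'), plus a few illustrative covariates.
df = pd.read_csv("claims_extract.csv")  # hypothetical extract

model = smf.logit(
    "adherent_12mo ~ C(copay_tier, Treatment(reference='<$10'))"
    " + age + C(sex) + C(type2_diabetes) + C(heart_failure)",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios; an OR near 0.47 for the
# '>=$50' tier would correspond to "53% less likely to adhere" versus
# the '<$10' reference group.
print(np.exp(model.params))
```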
“Lowering high out-of-pocket prescription costs may be key to improving adherence to guideline-recommended therapies and advancing overall quality of care in patients with type 2 diabetes and heart failure,” the authors conclude.
The authors acknowledge the study’s limitations, including the inability to exclude residual confounding, uncertain generalizability to people without insurance or with public insurance, and possible misclassification of type 2 diabetes and heart failure diagnoses or medical comorbidities. In addition, the study lacked information on patients’ preferences regarding medication use, including specific reasons for poor adherence, and could not assess how co-payments influenced initial prescription receipt or abandonment at the pharmacy, or other factors such as possible price inflation.
The study received no commercial funding. One author (not a lead author) is an adviser to several drug companies including ones that market SGLT2 inhibitors or GLP-1 agonists.
FROM JAMA CARDIOLOGY
Can a biodegradable brain implant deliver lifesaving cancer meds?
It’s the latest advance in a rapidly growing field using ultrasound – high-frequency sound waves undetectable to humans – to fight cancer and other diseases.
The problem addressed by the researchers is the blood-brain barrier, a nearly impenetrable blood vessel lining that keeps harmful molecules from passing into the brain from the blood. But this lining can also block chemo drugs from reaching cancer cells.
So the scientists implanted 1-cm² devices into the skulls of mice, directly behind the tumor site. The implants generate ultrasound waves, loosening the barrier and allowing the drugs to reach the tumor. The sound waves leave healthy tissue undamaged.
“You inject the drug into the body and turn on the ultrasound at the same time. You’re going to hit precisely at the tumor area every single time you use it,” said lead study author Thanh Nguyen, PhD, an associate professor of mechanical engineering at the University of Connecticut, Storrs.
The drug used in the study was paclitaxel, which normally struggles to cross the blood-brain barrier. The tumors shrank, and treated mice lived twice as long as untreated mice, with no adverse health effects 6 months later.
Breaking through the blood-brain barrier
The biodegradable implant is made of glycine, an amino acid that is also strongly piezoelectric, meaning it vibrates when an electric field is applied. To make it, the researchers grew glycine crystals, shattered them into pieces, and then used a process called electrospinning, which applies a high electrical voltage to the nanocrystals.
Voltage flows to the implant via an external device. The resulting ultrasound causes the tightly adhered cells of the blood-brain barrier to vibrate, stretching them out and creating space for pores to form.
“That allows in very tiny particles, including chemo drugs,” said Dr. Nguyen.
His earlier biodegradable implant broke apart from the force, but the new glycine implant is more flexible, stable, and highly piezoelectric. It could be implanted after a patient has surgery to remove a brain tumor, to continue treating residual cancer cells. The implant dissolves harmlessly in the body over time, and doctors can control its lifespan.
A new wave of uses for ultrasound
Dr. Nguyen’s study builds on similar efforts, including a recent clinical trial of a nonbiodegradable implant for treating brain tumors. Ultrasound can focus energy on precise targets in the body.
It’s like “using a magnifying glass to focus multiple beams of light on a point and burn a hole in a leaf,” said Neal Kassell, MD, founder and chairman of the Focused Ultrasound Foundation. This approach spares adjacent normal tissue.
Doctors now understand more than 30 ways that ultrasound interacts with tissue – from destroying abnormal tissue to delivering drugs more effectively to stimulating an immune response. A decade ago, only five such interactions were known.
This opens the door for treating “a wide spectrum of medical disorders,” from neurodegenerative diseases like Alzheimer’s and Parkinson’s to difficult-to-treat cancers of the prostate and pancreas, and even addiction, said Dr. Kassell.
Dr. Kassell envisions using focused ultrasound to treat brain tumors as an alternative (or complement) to surgery, chemotherapy, immunotherapy, or radiation therapy. In the meantime, implants have helped show “the effectiveness of opening the blood-brain barrier.”
Dr. Nguyen’s team plans on testing the safety and efficacy of their implant in pigs next. Eventually, Dr. Nguyen hopes to develop a patch with an array of implants to target different areas of the brain.
One study coauthor is cofounder of PiezoBioMembrane and SingleTimeMicroneedles. The other study authors reported no conflicts of interest.
A version of this article originally appeared on WebMD.com.
FROM SCIENCE ADVANCES
Smartwatches able to detect very early signs of Parkinson’s
Smartwatch movement data may reveal very early signs of Parkinson’s disease (PD), new research shows.
An analysis of wearable motion-tracking data from UK Biobank participants showed a strong correlation between reduced daytime movement over 1 week and a clinical diagnosis of PD up to 7 years later.
“Smartwatch data is easily accessible and low cost. By using this type of data, we would potentially be able to identify individuals in the very early stages of Parkinson’s disease within the general population,” lead researcher Cynthia Sandor, PhD, from Cardiff (Wales) University, said in a statement.
“We have shown here that a single week of data captured can predict events up to 7 years in the future. With these results we could develop a valuable screening tool to aid in the early detection of Parkinson’s,” she added.
“This has implications both for research, in improving recruitment into clinical trials, and in clinical practice, in allowing patients to access treatments at an earlier stage, in future when such treatments become available,” said Dr. Sandor.
The study was published online in Nature Medicine.
Novel biomarker for PD
Using machine learning, the researchers analyzed accelerometry data from 103,712 UK Biobank participants who wore a medical-grade smartwatch for a 7-day period during 2013-2016.
At the time of or within 2 years after accelerometry data collection, 273 participants were diagnosed with PD. An additional 196 individuals received a new PD diagnosis more than 2 years after accelerometry data collection (the prodromal group).
The patients with prodromal symptoms of PD and those who were diagnosed with PD showed a significantly reduced daytime acceleration profile up to 7 years before diagnosis, compared with age- and sex-matched healthy control persons, the researchers found.
The reduction in acceleration both before and following diagnosis was unique to patients with PD, “suggesting this measure to be disease specific with potential for use in early identification of individuals likely to be diagnosed with PD,” they wrote.
Accelerometry data proved more accurate than other risk factors (lifestyle, genetics, blood chemistry) or recognized prodromal symptoms of PD in predicting whether an individual would develop PD.
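As a simplified picture of how accelerometry can feed a risk model, consider the sketch below; the epoch file, column names, daytime window, and single-feature classifier are assumptions for illustration and are far simpler than the study’s machine-learning pipeline.

```python
# Hypothetical sketch: summarize wrist accelerometry as mean daytime
# acceleration per participant, then fit a simple risk classifier.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Assumed input: one row per participant-epoch with a timestamp and an
# acceleration summary (e.g., ENMO in milli-g).
epochs = pd.read_csv("accel_epochs.csv", parse_dates=["timestamp"])

daytime = epochs[epochs["timestamp"].dt.hour.between(8, 20)]
feature = (
    daytime.groupby("participant_id")["enmo_mg"]
    .mean()
    .rename("mean_daytime_accel")
)

# Assumed labels: 1 = later diagnosed with PD (prodromal), 0 = control.
labels = pd.read_csv("labels.csv").set_index("participant_id")["prodromal_pd"]

X = feature.loc[labels.index].to_frame()
clf = LogisticRegression().fit(X, labels)

# Reduced daytime movement before diagnosis would surface here as a
# negative coefficient on mean_daytime_accel.
print(clf.coef_)
```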
“Our results suggest that accelerometry collected with wearable devices in the general population could be used to identify those at elevated risk for PD on an unprecedented scale and, importantly, individuals who will likely convert within the next few years can be included in studies for neuroprotective treatments,” the researchers conclude in their article.
High-quality research
In a statement from the U.K.-based nonprofit Science Media Centre, José López Barneo, MD, PhD, with the University of Seville (Spain), said this “good quality” study “fits well with current knowledge.”
Dr. Barneo noted that other investigators have also observed that slowness of movement is a characteristic feature of some people who subsequently develop PD.
But those studies involved preselected cohorts of persons at risk of developing PD, or they were carried out in hospital settings where healthcare staff conducted the movement analysis. In contrast, the current study was conducted in a very large cohort drawn from the general U.K. population.
Also weighing in, José Luis Lanciego, MD, PhD, with the University of Navarra (Spain), said the “main value of this study is that it has demonstrated that accelerometry measurements obtained using wearable devices (such as a smartwatch or other similar devices) are more useful than the assessment of any other potentially prodromal symptom in identifying which people in the [general] population are at increased risk of developing Parkinson’s disease in the future, as well as being able to estimate how many years it will take to start suffering from this neurodegenerative process.
“In these diseases, early diagnosis is to some extent questionable, as early diagnosis is of little use if neuroprotective treatment is not available,” Dr. Lanciego noted.
“However, it is of great importance for use in clinical trials aimed at evaluating the efficacy of new potentially neuroprotective treatments whose main objective is to slow down – and, ideally, even halt ― the clinical progression that typically characterizes Parkinson’s disease,” Dr. Lanciego added.
The study was funded by the UK Dementia Research Institute, the Welsh government, and Cardiff University. Dr. Sandor, Dr. Barneo, and Dr. Lanciego have no relevant disclosures.
A version of this article originally appeared on Medscape.com.
FROM NATURE MEDICINE
Coffee’s brain-boosting effect goes beyond caffeine
“There is a widespread anticipation that coffee boosts alertness and psychomotor performance. By gaining a deeper understanding of the mechanisms underlying this biological phenomenon, we pave the way for investigating the factors that can influence it and even exploring the potential advantages of those mechanisms,” study investigator Nuno Sousa, MD, PhD, with the University of Minho, Braga, Portugal, said in a statement.
The study was published online in Frontiers in Behavioral Neuroscience.
Caffeine can’t take all the credit
Certain compounds in coffee, including caffeine and chlorogenic acids, have well-documented psychoactive effects, but the psychological impact of coffee/caffeine consumption as a whole remains a matter of debate.
The researchers investigated the neurobiological impact of coffee drinking on brain connectivity using resting-state functional MRI (fMRI).
They recruited 47 generally healthy adults (mean age, 30 years; 31 women) who regularly drank a minimum of one cup of coffee per day. Participants refrained from consuming caffeinated food or beverages for at least 3 hours before undergoing fMRI.
To tease out the specific impact of caffeine itself, 30 habitual coffee drinkers (mean age, 32 years; 27 women) were given hot water containing the same amount of caffeine, rather than coffee.
The investigators conducted two fMRI scans – one before, and one 30 minutes after drinking coffee or caffeine-infused water.
Both drinking coffee and drinking plain caffeine in water led to a decrease in functional connectivity of the brain’s default mode network, which is typically active during self-reflection in resting states.
This finding suggests that consuming either coffee or caffeine heightened individuals’ readiness to transition from a state of rest to engaging in task-related activities, the researchers noted.
However, drinking a cup of coffee also boosted connectivity in the higher visual network and the right executive control network, which are linked to working memory, cognitive control, and goal-directed behavior – something that did not occur from drinking caffeinated water.
“Put simply, individuals exhibited a heightened state of preparedness, being more responsive and attentive to external stimuli after drinking coffee,” said first author Maria Picó-Pérez, PhD, with the University of Minho.
Given that some of the effects of coffee also occurred with caffeine alone, it’s “plausible to assume that other caffeinated beverages may share similar effects,” she added.
Still, certain effects were specific to coffee drinking, “likely influenced by factors such as the distinct aroma and taste of coffee or the psychological expectations associated with consuming this particular beverage,” the researchers wrote.
The investigators report that the observations could provide a scientific foundation for the common belief that coffee increases alertness and cognitive functioning. Further research is needed to differentiate the effects of caffeine from the overall experience of drinking coffee.
A limitation of the study is the absence of a nondrinker control sample (to rule out the withdrawal effect) or an alternative group that consumed decaffeinated coffee (to rule out the placebo effect of coffee intake) – something that should be considered in future studies, the researchers noted.
The study was funded by the Institute for the Scientific Information on Coffee. The authors declared no relevant conflicts of interest.
A version of this article originally appeared on Medscape.com.
FROM FRONTIERS IN BEHAVIORAL NEUROSCIENCE
Medical cannabis does not reduce use of prescription meds
TOPLINE:
Medical cannabis legalization does not reduce the use of prescription pain medications among patients with chronic noncancer pain, according to a new study published in Annals of Internal Medicine.
METHODOLOGY:
- Cannabis advocates suggest that legal medical cannabis can be a partial solution to the opioid overdose crisis in the United States, which claimed more than 80,000 lives in 2021.
- Current research on whether legalized cannabis reduces reliance on prescription pain medication is inconclusive.
- Researchers examined insurance data for the period 2010-2022 from 583,820 adults with chronic noncancer pain.
- They drew data from 12 states in which medical cannabis is legal and from 17 states in which it is not to emulate a hypothetical randomized trial. The control group simulated the prescription rates that would have been expected had medical cannabis not been available.
- Authors evaluated prescription rates for opioids, nonopioid painkillers, and pain interventions, such as physical therapy.
TAKEAWAY:
In a given month during the first 3 years after legalization, in states with medical cannabis, the investigators found the following (a worked comparison of these figures appears after the list):
- There was an average decrease of 1.07 percentage points in the proportion of patients who received any opioid prescription, compared with a 1.12 percentage point decrease in the control group.
- There was an average increase of 1.14 percentage points in the proportion of patients who received any nonopioid prescription painkiller, compared with a 1.19 percentage point increase in the control group.
- There was a 0.17 percentage point decrease in the proportion of patients who received any pain procedure, compared with a 0.001 percentage point decrease in the control group.
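To see why these paired figures amount to a null result, the change in the cannabis states can be set against the change the simulated control implies; the residual is the estimated policy effect. Below is a minimal worked comparison using the numbers above; it reflects the comparison’s logic, not the authors’ statistical model.

```python
# Changes in the share of patients receiving each treatment, in
# percentage points: (medical-cannabis states, simulated control).
changes = {
    "opioid prescriptions":    (-1.07, -1.12),
    "nonopioid prescriptions": (+1.14, +1.19),
    "pain procedures":         (-0.17, -0.001),
}

for outcome, (cannabis_states, control) in changes.items():
    # The policy effect is the change beyond what the control implies.
    effect = cannabis_states - control
    print(f"{outcome}: {effect:+.3f} percentage points")

# Output:
#   opioid prescriptions: +0.050 percentage points
#   nonopioid prescriptions: -0.050 percentage points
#   pain procedures: -0.169 percentage points
# Each adjusted difference is near zero, which is why the authors
# report no important effect of medical cannabis laws.
```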
IN PRACTICE:
“This study did not identify important effects of medical cannabis laws on receipt of opioid or nonopioid pain treatment among patients with chronic noncancer pain,” according to the researchers.
SOURCE:
The study was led by Emma E. McGinty, PhD, of Weill Cornell Medicine, New York, and was funded by the National Institute on Drug Abuse.
LIMITATIONS:
The investigators used a simulated, hypothetical control group that was based on untestable assumptions. They also drew data solely from insured individuals, so the study does not necessarily represent uninsured populations.
DISCLOSURES:
Dr. McGinty reports receiving a grant from NIDA. Her coauthors reported receiving support from NIDA and the National Institutes of Health.
A version of this article first appeared on Medscape.com.