Semaglutide Improves Taste Sensitivity in Women With Obesity
The glucagon-like peptide-1 (GLP-1) receptor agonist semaglutide (Ozempic, Wegovy) enhances taste sensitivity, changes brain responses to sweet tastes and may even alter expression of genes in the tongue associated with taste bud development, according to new research presented at the annual meeting of the Endocrine Society, held in Boston.
“Some studies have reported that individuals living with obesity often perceive tastes as less intense,” noted Mojca Jensterle Sever, PhD, of the University Medical Centre in Ljubljana, Slovenia, who presented the work. Research also suggests that “populations prone to obesity have an inherently elevated desire for sweet and energy-dense foods,” she continued.
Studies in animal models have also previously shown that GLP-1 plays an important role in taste sensitivity, but it was not known if this hormone also influenced human taste perception.
In this proof-of-concept study, researchers randomly assigned 30 women with polycystic ovary syndrome (PCOS) to either 1 mg of semaglutide, administered once a week, or placebo for 16 weeks. Participants were on average 34 years old with a body mass index (BMI) of 36.4. Participants with PCOS were selected with the “aim to reduce variability in taste perception across different phases of the menstrual cycle,” Dr. Sever said.
Prior to the intervention, researchers tested participants’ taste sensitivity using 16 taste strips infused with four different concentrations of sweet, sour, salty, and bitter substances. Participants were asked to identify the taste of each strip. Every correct answer counted as one point, with a possible total of 16 points overall. Tongue biopsies were conducted for gene expression analysis.
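To make the scoring concrete, here is a minimal sketch of how the 16-point identification score might be tallied. The response format is an assumption for illustration rather than a detail reported by the researchers.

```python
# Hypothetical tally of the 16-strip taste identification test:
# 4 tastes x 4 concentrations, 1 point per correctly identified strip.

TASTES = ["sweet", "sour", "salty", "bitter"]

def taste_score(responses):
    """responses: list of (true_taste, reported_taste) pairs, one per strip."""
    assert len(responses) == 16, "test uses 16 strips (4 tastes x 4 concentrations)"
    return sum(1 for true_taste, reported in responses if reported == true_taste)

# Example: a participant who misidentifies two bitter strips scores 14/16.
responses = [(t, t) for t in TASTES for _ in range(4)]  # 16 all-correct answers
responses[-1] = ("bitter", "sour")
responses[-2] = ("bitter", "salty")
print(taste_score(responses))  # 14
```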
Researchers also used functional MRI (fMRI) to evaluate brain responses to a series of calorie-dense, low-calorie, and non-food visual cues as well as to sweet taste stimulus. A sweet solution was administered on the tongue 30 minutes before and after participants consumed a standardized meal: a high-protein enriched nutritional drink.
These tests were repeated after 16 weeks.
On fMRI, the semaglutide group exhibited decreased activation of the putamen (a brain structure involved in the reward system) in response to calorie-dense cues. In response to the sweet taste stimulus, those taking semaglutide showed increased activation of the angular gyrus compared with the placebo group. The angular gyrus is part of the brain’s parietal lobe and is involved in language, memory, reasoning, and attention.
Lastly, researchers identified differential mRNA expression in the genes EYA, PRMT8, CRLF1, and CYP1B1, which are associated with taste bud development, renewal, and differentiation.
The findings are “fascinating, because we think about all of the factors that this new class of agents are able to improve, but taste is often not something that we look at, though there have been very strong associations,” said Gitanjali Srivastava, MD, of Vanderbilt University, Nashville, Tennessee, who moderated the session.
“Is it possible that another mechanism of action for this class of agents is perhaps indirectly altering our taste perception,” she posited, and, because of that, “we have an altered sense of satiety and hunger?”
Dr. Sever noted several limitations to the study, including that only specific tastes were evaluated in a controlled study environment, “which may not reflect everyday experience,” she said. Taste perception can also vary widely from person to person, and changes in mRNA expression do not necessarily reflect changes in protein levels or activity.
“Our study should be seen and interpreted as a proof-of-concept study,” Dr. Sever added, with additional research needed to explore the relationship between semaglutide and taste perception.
Dr. Srivastava consults for Novo Nordisk, Eli Lilly, and Rhythm Pharmaceuticals. She has received research grant support from Eli Lilly. Dr. Sever reports no relevant financial relationships.
A version of this article appeared on Medscape.com.
‘Ozempic Burgers’ Offer Indulgences to People With Obesity
My crystal ball says that Big Food’s ongoing development and marketing of products designed for the reduced appetites of people taking anti-obesity medications will simultaneously be welcomed by their target market and scorned by self-righteous, healthy-living, just-try-harder, isn’t-this-just-feeding-the-problem hypocrites.
For the privileged, self-righteous, healthy-living crowd, the right to enjoy dietary indulgences and conveniences is inversely proportional to one’s weight. Judgment typically isn’t cast on the less-than-perfect choices of those with so-called “normal” weight; the same often can’t be said for those with obesity.
Think you’re free from this paradigm? If you are, good for you. But I’d wager that there are plenty of readers who state that they’re free from bias, but when standing in supermarket checkout lines, they scrutinize and silently pass judgment on the contents of the grocery carts of people with obesity or, similarly, on the orders of people with obesity in fast-food restaurants.
Yet, there are bags of chips and cookies in most of our weekly carts, and who among us doesn’t, at times, grab some greasy comfort or convenience?
Unfortunately, the fuel for these sorts of judgments — implicit weight bias — is not only pervasive but also durable. A recent study of temporal changes to implicit biases demonstrated that, unlike biases about race, skin tone, sexuality, age, and disability (tested levels of which declined between 2007 and 2016), implicit biases about weight have remained stable.
As to the products themselves, according to the recent article, they’ll be smaller, lower in calories, and higher in protein and fat than their nonshrunken counterparts.
With that said, I’d be remiss if I didn’t assert that the discussion of the merits or lack thereof of these sorts of offerings is misguided and pointless in that the food industry’s job is not one of social service provision or preventive healthcare. As I’ve written in the past, the food industry is neither friend, foe, nor partner. The food industry’s one job is to sell food, and if they see a market opportunity, they’ll take it. In this case, that turns out to be refreshing in a sense in that unlike moral-panic scolds, the food industry doesn’t judge its customers’ right to buy its products on the basis of how much their customers weigh.
Whereas the food industry’s response to anti-obesity medications’ impact on appetite may be to embrace it, the responses of many others, including some in medicine, seem to involve some degree of judgment or scorn. Yes, our behavior has an impact on our weight, but intentional behavior change in the name of weight requires multiple layers of deep and perpetual privilege. And yes, our environment is indeed a tremendous contributor to the challenge of obesity, but the world is full of medical conditions influenced or caused by our environment. Yet discussions around how medications fail to address obesity’s root cause are the only such root-cause discussions I ever see.
Put more plainly, “how dare we develop medications for conditions influenced by our environment” is an odd stance to take in a world full of conditions influenced by our environments and where our environments’ primary change-driver is sales. Products that support the use of medications that improve life’s quality while markedly reducing the risk for an ever-growing number of conditions should be celebrated.
Dr. Freedhoff has disclosed the following relevant financial relationships:
Serve(d) as a director, officer, partner, employee, adviser, consultant, or trustee for Bariatric Medical Institute and Constant Health; received research grant from Novo Nordisk; publicly shared opinions via Weighty Matters and social media.
A version of this article appeared on Medscape.com.
Arterial Stiffness May Predict Risk for Glaucoma
TOPLINE:
Greater arterial stiffness was associated with an increased risk for developing glaucoma, a new study found.
METHODOLOGY:
- To study the link between arterial stiffness and glaucoma, the researchers evaluated 4713 individuals (mean age, 66 years; 58% men) without the eye condition at baseline between April 2011 and November 2012.
- They assessed arterial stiffness by measuring aortic pulse wave velocity, estimated carotid-femoral pulse wave velocity, and aortic pulse pressure.
- The primary outcome was incident glaucoma, identified from prescriptions for eye drops or hospital records.
TAKEAWAY:
- Overall, 301 people in the study developed glaucoma over a mean follow-up period of 10.5 years.
- The risk for incident glaucoma increased across quartiles of arterial stiffness, with the highest risk observed in the fourth quartile for aortic pulse wave velocity (HR, 2.41; 95% CI, 1.36-4.26), estimated carotid-femoral pulse wave velocity (HR, 2.29; 95% CI, 1.27-4.13), and aortic pulse pressure (HR, 1.76; 95% CI, 1.10-2.82).
- The cumulative incidence of glaucoma rose with increases in arterial stiffness. This trend was statistically significant for aortic pulse wave velocity and estimated carotid-femoral pulse wave velocity (P < .0001 for both) and for aortic pulse pressure (P = .02). (A sketch of this kind of quartile-based analysis follows this list.)
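As an illustration of how a quartile-based survival analysis like this is typically set up, here is a minimal sketch using the lifelines library. The data file and column names are hypothetical assumptions; this is not the study's actual code.

```python
# Minimal sketch of a quartile-based Cox proportional hazards analysis,
# assuming a pandas DataFrame with follow-up time, a glaucoma event
# indicator, and aortic pulse wave velocity (aPWV).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical dataset

# Split aPWV into quartiles; Q1 (lowest stiffness) serves as the reference.
df["apwv_q"] = pd.qcut(df["apwv"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
dummies = pd.get_dummies(df["apwv_q"], prefix="apwv", drop_first=True).astype(float)

model_df = pd.concat([df[["followup_years", "glaucoma"]], dummies], axis=1)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="followup_years", event_col="glaucoma")
cph.print_summary()  # the exp(coef) column gives HRs vs the lowest quartile
```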
IN PRACTICE:
“Arterial stiffness…which can be easily and accurately measured, could be used as a tool in clinical practice [as part of routine blood pressure measurement] to help identify people at risk of glaucoma and as a therapeutic target to prevent glaucoma progression,” the authors wrote.
SOURCE:
This study was led by Angela L. Beros, MPH, of the School of Population Health at the University of Auckland, Auckland, New Zealand, and published online in the American Journal of Ophthalmology.
LIMITATIONS:
The cohort study did not clinically assess for glaucoma, potentially leading to the inclusion of individuals with the condition. Not all participants with incident glaucoma, particularly those unaware of their diagnosis, may have been identified. Intraocular pressure and central corneal thickness, which are common risk factors for glaucoma, were not included in the multivariate analysis.
DISCLOSURES:
The study did not receive any funding. The authors declared no conflicts of interest.
A version of this article appeared on Medscape.com.
Rheumatologists Deserve Better Pay, Say Respondents to Compensation Survey
While rheumatologists reported small pay gains this year, more than half said the specialty was underpaid.
In the Medscape Rheumatologist Compensation Report 2024, 53% said that they did not feel fairly paid given their work demands. Rheumatologist respondents reported earning an average of $286,000 annually, ranking them as the seventh-lowest earners of the 29 specialties surveyed. Orthopedics was the highest-earning specialty, with $558,000 in annual income, and diabetes & endocrinology was the lowest-earning, with $256,000 in annual compensation.
In last year’s report, a rheumatologist’s average income was $281,000.
This new report was compiled from an online survey of more than 7000 physicians across 29 specialties; 1% of respondents were rheumatologists. Most respondents (58%) were women, and 39% were men. The survey was available from October 2, 2023, to January 16, 2024.
Rheumatologists reported a 2% increase in pay compared with the previous year’s report; physical medicine and rehabilitation had the largest bump, at 11%. A total of 29% of rheumatologists said their pay had increased from the previous year, 18% reported lower earnings, and about half (53%) reported that their income remained the same.
When asked about physician pay in the United States, 61% of rheumatologists said most physicians were underpaid, 34% said physicians were paid fairly, and only 4% said most physicians were overpaid.
“Most physicians who take care of chronic illnesses in long-term patients are underpaid. Not all doctors are,” said one survey respondent.
In addition, 41% of rheumatologists said they supplemented their income with additional work, including other medical-related work (30%), nonmedical work (5%), adding more hours to their primary job (5%), and medical moonlighting (4%). (Respondents could choose more than one option in the survey.) This is slightly lower than in last year’s survey, in which 46% of rheumatologist respondents said they took on additional work.
About three out of four rheumatologists said that other medical businesses or competing physician practices did not affect their income, and only 5% said these competitors considerably affected income.
Rheumatologists listed being good at their job/diagnosing (36%) as the most rewarding part of their profession, followed by gratitude from/relationships with patients (26%) and making the world a better place/helping others (19%). Difficulties with insurance and receiving fair reimbursement (22%), dealing with difficult patients (20%), having many rules and regulations (18%), and working with an electronic health record system (15%) were the most commonly reported challenges for rheumatologists.
A version of this article appeared on Medscape.com.
No Increased Risk for Fractures Seen With Frequent Steroid Injections for Musculoskeletal Conditions
TOPLINE:
The cumulative effect of frequent corticosteroid injections (CSIs), a common treatment for musculoskeletal pain, does not appear to increase the risk for fractures.
METHODOLOGY:
- Researchers utilized an institutional electronic health record database to identify adults in Olmsted County, Minnesota, receiving corticosteroid injections from May 1, 2018, to July 1, 2022.
- Corticosteroid equivalents were calculated for the injected medications, including methylprednisolone, triamcinolone, betamethasone, and dexamethasone (a conversion sketch follows this list).
- Patients were excluded if they had a prescription for oral prednisone equivalents greater than 2.5 mg/day for more than 30 days.
- Fracture events were identified using ICD-9 and ICD-10 codes and were included only if they occurred after the first corticosteroid injection.
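To make the dose-conversion step concrete, here is a minimal sketch using standard published anti-inflammatory equivalencies (prednisone 5 mg ≈ methylprednisolone 4 mg ≈ triamcinolone 4 mg ≈ dexamethasone 0.75 mg ≈ betamethasone 0.6 mg). The study's exact conversion factors were not reported, so treat these values as illustrative assumptions.

```python
# Hypothetical conversion of injected corticosteroid doses to
# prednisone-equivalent milligrams, using standard published
# equivalencies (assumed here; not the study's reported factors).
PREDNISONE_EQUIV_PER_MG = {
    "methylprednisolone": 5 / 4,     # 4 mg ~ 5 mg prednisone
    "triamcinolone":      5 / 4,     # 4 mg ~ 5 mg prednisone
    "dexamethasone":      5 / 0.75,  # 0.75 mg ~ 5 mg prednisone
    "betamethasone":      5 / 0.6,   # 0.6 mg ~ 5 mg prednisone
}

def prednisone_equivalent(drug: str, dose_mg: float) -> float:
    """Return the prednisone-equivalent dose in mg for an injected drug."""
    return dose_mg * PREDNISONE_EQUIV_PER_MG[drug.lower()]

# Example: a 40 mg triamcinolone injection ~ 50 mg prednisone equivalent.
print(round(prednisone_equivalent("triamcinolone", 40), 1))  # 50.0
```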
TAKEAWAY:
- A total of 7197 patients (mean age, 64.4 years) were analyzed; 346 (4.8%) had a new fracture a mean of 329 days after the first corticosteroid injection, including 149 (43.1%) in classic osteoporotic locations.
- The study reported no increased fracture risk associated with corticosteroid injections and no significant difference in fracture rates across cumulative corticosteroid injection dose quartiles, regardless of osteoporosis status.
- Factors such as previous fractures, age, and Charlson Comorbidity Index were associated with a higher risk for fractures, not corticosteroid injections.
IN PRACTICE:
“Clinicians should be reassured that frequent CSI is not associated with higher fracture risk and should not withhold these important pain treatments owing to concern for fracture,” wrote the authors of the study.
SOURCE:
The study was led by Terin T. Sytsma, MD, Division of Community Internal Medicine, Geriatrics, and Palliative Care, Mayo Clinic, Rochester, Minnesota. It was published online in JAMA Network Open.
LIMITATIONS:
The study’s retrospective cohort design and its focus on a predominantly White population in a single community may limit the generalizability of the findings. Confounding variables such as smoking status, alcohol intake, and physical activity were acknowledged as potential contributors to fracture risk. Only clinically apparent fractures were considered, excluding silent vertebral fractures, and differences in corticosteroid formulation were not delineated.
DISCLOSURES:
The study was supported by a Mayo Clinic Catalyst Award to Dr. Sytsma. The authors had no conflicts of interest to report.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article appeared on Medscape.com.
In MS With Mild Symptoms, Non-Motor Symptoms Predict Later Mobility Problems
NASHVILLE, TENNESSEE — In patients with multiple sclerosis (MS) who have mild symptoms, non-motor symptoms correlate with walking and balance problems. However, these associations fall away among patients with more severe disease, according to a new study performed in Australia. The findings could eventually help tailor physical activity interventions.
The research grew out of frustrations with developing interventions focused on strength. “There are many systematic reviews showing stronger and stronger evidence that exercise is beneficial. It does change your walking. It does improve your balance,” said Katrina Williams, PhD, during a presentation of the results at the annual meeting of the Consortium of Multiple Sclerosis Centers.
However, when her group’s intervention studies yielded no statistically significant improvements, she began to search for explanations, and began to suspect heterogeneity among MS patients. Their clinic took all comers, regardless of disability level. “[Our attitude was] we will make it work. We’ll get you actively moving and exercising. But when you break down a lot of those systematic reviews, there’s not a lot of teasing out of disability levels. So, potentially, it is the disability level that might be leading to why some people don’t change or why we’re not getting the statistically significant benefits, because we’re not addressing the individual at their level of disease progression,” said Dr. Williams, who is a senior lecturer in physiotherapy at the University of Queensland, Brisbane, Australia.
“Physiotherapists, we love exercise, we love movement, but we’re a bit unidimensional. It’s some strength training, [or] let’s get on that bike and do cardiovascular. But that may not be enough for individuals who have different symptoms profiles. We’re assuming that the motor profile is the most important, and the one that needs to be addressed in these individuals,” said Dr. Williams.
Focusing on Non-Motor Symptoms
When she searched the literature, she could find little evidence of non-motor symptoms correlating with walking, balance, or even quality of life. To dig deeper, her group studied 220 MS patients in Australia who self-reported symptoms of dizziness, vision problems, fatigue, and spasticity. The population had a mean age of 42 years, and 82% were female. Respondents ranged in disease severity from disease step (DS) 0 to DS 6 and were categorized as DS 0 (mild, mostly sensory symptoms) to DS 3 (MS interferes with walking) or DS 4 (early cane use) to DS 6 (requiring bilateral walking support).
Deficits were more commonly reported in the DS 4-6 group than the DS 0-3 group with respect to light touch (88% vs 72%), proprioception (63% vs 41%), fatigue (100% vs 96%), and spasticity (78% vs 69%). There were no significant differences in dizziness, vision, or memory/cognition/emotion.
A linear regression model incorporating sensory worsening, age, social participation, perceived deficit, and spasticity showed an adjusted R2 value of 0.73 for the full cohort. When the researchers looked only at DS 0-3 patients, the adjusted R2 value strengthened to 0.86; among the DS 4-6 group, the correlation largely disappeared, with a value of 0.16. Specifically, associations between perceived walking deficit and sensory worsening (R2, 0.45 vs 0.31), fatigue (0.67 vs 0.05), spasticity (0.47 vs 0.16), and balance (0.8 vs 0.16) were stronger in the DS 0-3 group than in the overall (DS 0-6) group.
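For readers who want the subgroup comparison made concrete, below is a minimal sketch of fitting the same regression in the full cohort and in each disability subgroup, then comparing adjusted R2 values. The variable names, data file, and statsmodels workflow are illustrative assumptions, not the presenters' actual analysis.

```python
# Sketch: fit one linear model in the full cohort and in disability
# subgroups, then compare adjusted R^2. Column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ms_survey.csv")  # hypothetical data file

formula = ("walking_deficit ~ sensory_worsening + age + "
           "social_participation + perceived_deficit + spasticity")

for label, subset in [("DS 0-6", df),
                      ("DS 0-3", df[df["disease_step"] <= 3]),
                      ("DS 4-6", df[df["disease_step"] >= 4])]:
    fit = smf.ols(formula, data=subset).fit()
    print(f"{label}: adjusted R^2 = {fit.rsquared_adj:.2f}")
```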
“Most non-motor symptoms do have moderate to weak correlations to walking confidence and walking balance, and quality of life, and the correlations do decline as disability worsens. Those with less disability had more correlations that were stronger, particularly for the walking and balance confidence. So [among those] walking without an aid, there are more non-motor correlations aligned to the actual outcomes. In more disabled, they fell away, so there’s something else going on that we do have to look at,” said Dr. Williams.
She called for other clinicians to explore non-motor symptoms in patients with less disability, and the relationships of those symptoms to gait, balance, and overall MS impact, in the hopes that such observations could improve the tailoring of physiotherapy programs.
Perception May Differ From Actual Function
During the Q&A session, Nora Fritz, PhD, an associate professor of neurology at Wayne State University, Detroit, Michigan, asked about the lack of correlations seen in more disabled patients. “It’s not exactly what you would expect to happen,” said Dr. Fritz, in an interview.
She asked Dr. Williams whether the study had sufficient power to detect associations in patients with more severe disability, given the study’s relatively small sample size and the many predictors in its regression model. Dr. Fritz also noted that perceived function may differ from actual function, which a survey cannot capture. Dr. Williams responded that the group is now working to incorporate more clinical measures into their correlations.
Another audience member said she was “perplexed” by the drop-off of correlation in the most severe group. She suggested the possibility that as patients become more disabled, they may be less likely to perceive the relatively less severe non-motor symptoms and therefore did not report them.
Dr. Williams and Dr. Fritz have no relevant financial disclosures.
NASHVILLE, TENNESSEE — However, these associations fall away among patients with more severe disease, according to a new study performed in Australia. The findings could eventually help tailor physical activity interventions.
NASHVILLE, TENNESSEE — Non-motor symptoms such as fatigue, spasticity, and sensory deficits correlate with walking and balance confidence in people with multiple sclerosis (MS) who have lower levels of disability. However, these associations fall away among patients with more severe disease, according to a new study performed in Australia. The findings could eventually help tailor physical activity interventions.
The research grew out of frustrations with developing interventions focused on strength. “There are many systematic reviews showing stronger and stronger evidence that exercise is beneficial. It does change your walking. It does improve your balance,” said Katrina Williams, PhD, during a presentation of the results at the annual meeting of the Consortium of Multiple Sclerosis Centers.
However, when her group’s intervention studies yielded no statistically significant improvements, she began searching for explanations and came to suspect heterogeneity among MS patients. Their clinic took all comers, regardless of disability level. “[Our attitude was] we will make it work. We’ll get you actively moving and exercising. But when you break down a lot of those systematic reviews, there’s not a lot of teasing out of disability levels. So, potentially, it is the disability level that might be leading to why some people don’t change or why we’re not getting the statistically significant benefits, because we’re not addressing the individual at their level of disease progression,” said Dr. Williams, who is a senior lecturer in physiotherapy at the University of Queensland, Brisbane, Australia.
“Physiotherapists, we love exercise, we love movement, but we’re a bit unidimensional. It’s some strength training, [or] let’s get on that bike and do cardiovascular. But that may not be enough for individuals who have different symptom profiles. We’re assuming that the motor profile is the most important, and the one that needs to be addressed in these individuals,” said Dr. Williams.
Focusing on Non-Motor Symptoms
When she searched the literature, she could find little evidence of non-motor symptoms correlating with walking, balance, or even quality of life. To dig deeper, her group studied 220 MS patients in Australia who self-reported symptoms of dizziness, vision problems, fatigue, and spasticity. The population had a mean age of 42 years, and 82% were female. Participants ranged in disease severity from disease step (DS) 0 to DS 6, and the researchers grouped them into a lower-disability stratum, from DS 0 (mild symptoms that were mostly sensory) through DS 3 (MS interferes with walking), and a higher-disability stratum, from DS 4 (early cane use) through DS 6 (requiring bilateral walking support).
Deficits were more commonly reported in the DS 4-6 group than the DS 0-3 group with respect to light touch (88% vs 72%), proprioception (63% vs 41%), fatigue (100% vs 96%), and spasticity (78% vs 69%). There were no significant differences in dizziness, vision, or memory/cognition/emotion.
A linear regression model incorporating sensory worsening, age, social participation, perceived deficit, and spasticity yielded an adjusted R2 of 0.73 for the cohort overall. When the analysis was restricted to DS 0-3 patients, the adjusted R2 strengthened to 0.86; among the DS 4-6 group, the association largely disappeared, with an R2 of 0.16. Specifically, the associations between perceived walking deficit and sensory worsening (R2 0.45 vs 0.31), fatigue (0.67 vs 0.05), spasticity (0.47 vs 0.16), and balance (0.80 vs 0.16) were all stronger in the DS 0-3 group than in the overall (DS 0-6) group.
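To make the stratified analysis concrete, here is a minimal sketch of fitting an ordinary least squares model and comparing adjusted R2 across disability strata. The data, variable names, and effect sizes are synthetic stand-ins, not the study's dataset; the point is only the mechanics of subgroup regression.

```python
# Compare adjusted R^2 for a walking-deficit model across disability
# strata. All data below are synthetic; column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 220  # same order of magnitude as the study's sample

df = pd.DataFrame({
    "sensory_worsening": rng.normal(size=n),
    "age": rng.uniform(18, 65, size=n),
    "social_participation": rng.normal(size=n),
    "spasticity": rng.normal(size=n),
    "disease_step": rng.integers(0, 7, size=n),  # DS 0-6
})
# Hypothetical outcome: perceived walking deficit driven by two predictors.
df["walking_deficit"] = (
    0.5 * df["sensory_worsening"] + 0.3 * df["spasticity"]
    + rng.normal(scale=1.0, size=n)
)

def adjusted_r2(subset: pd.DataFrame) -> float:
    """Fit OLS on one stratum and return its adjusted R^2."""
    X = sm.add_constant(subset[["sensory_worsening", "age",
                                "social_participation", "spasticity"]])
    return sm.OLS(subset["walking_deficit"], X).fit().rsquared_adj

for label, mask in [("DS 0-6", df["disease_step"] <= 6),
                    ("DS 0-3", df["disease_step"] <= 3),
                    ("DS 4-6", df["disease_step"] >= 4)]:
    print(label, round(adjusted_r2(df[mask]), 2))
```

With real data, a drop in adjusted R2 in the DS 4-6 stratum, as the study reported, would indicate that these predictors explain little of the variance in perceived walking deficit once disability is severe.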
“Most non-motor symptoms do have moderate to weak correlations to walking confidence and walking balance, and quality of life, and the correlations do decline as disability worsens. Those with less disability had more correlations that were stronger, particularly for the walking and balance confidence. So [among those] walking without an aid, there are more non-motor correlations aligned to the actual outcomes. In more disabled, they fell away, so there’s something else going on that we do have to look at,” said Dr. Williams.
She called for other clinicians to explore non-motor symptoms in patients with less disability, and the relationships of those symptoms to gait, balance, and overall MS impact, in the hopes that such observations could improve the tailoring of physiotherapy programs.
Perception May Differ From Actual Function
During the Q&A session, Nora Fritz, PhD, an associate professor of neurology at Wayne State University, Detroit, Michigan, asked about the lack of correlations seen in more disabled patients. “It’s not exactly what you would expect to happen,” said Dr. Fritz, in an interview.
She asked Dr. Williams whether the study had sufficient power to detect associations in patients with more severe disability, given the relatively small sample size and the many predictors in the regression model. Dr. Fritz also noted that perceptions may differ from actual function, and actual function cannot be captured using a survey. Dr. Williams responded that the group is now working to incorporate more clinical measures into its analyses.
Another audience member said she was “perplexed” by the drop-off of correlation in the most severe group. She suggested the possibility that as patients become more disabled, they may be less likely to perceive the relatively less severe non-motor symptoms and therefore did not report them.
Dr. Williams and Dr. Fritz have no relevant financial disclosures.
FROM CMSC 2024
The Value of Early Education
Early education is right up there with motherhood and apple pie as unarguable positive concepts. How could exposing young children to a school-like atmosphere not be a benefit, particularly in communities dominated by socioeconomic challenges? While there are some questions about the value of playing Mozart to infants, early education in the traditional sense continues to be viewed as a key strategy for providing young children a preschool foundation on which a successful academic career can be built. Several oft-cited randomized controlled trials have fueled both private and public interest and funding.
However, a recent commentary published in Science suggests that the evidence on these programs is “not unequivocally positive and much more research is needed.” “Worrisome results in Tennessee,” “Success in Boston,” and “Largely null results for Headstart” are just a few of the article’s section titles, and they convey a sense of the inconsistency the investigators found as they reviewed early education systems around the country.
While there may be some politicians who may attempt to use the results of this investigation as a reason to cancel public funding of underperforming early education programs, the authors avoid this baby-and-the-bathwater conclusion. Instead, they urge more rigorous research “to understand how effective programs can be designed and implemented.”
The kind of re-thinking and brainstorming these investigators suggest takes time. While we’re waiting for this process to gain traction, this might be a good time to consider some of the benefits of early education that we don’t usually consider when our focus is on academic metrics.
A recent paper in Children’s Health Care by investigators at the Boston University Medical Center and School of Medicine considered the diet of children attending preschool. Looking at the dietary records of more than 300 children attending 30 childcare centers, the researchers found that the children’s diets before arrival at daycare were less healthy than while they were in daycare. “The hour after pickup appeared to be the least healthful” of any of the time periods surveyed. Of course, we will all conjure up images of what this chaotic post-daycare pickup may look like and cut the harried parents and grandparents some slack when it comes to nutritional choices. However, the bottom line is that, for the group of children surveyed, being in preschool or daycare protected them from the less healthy diet they were offered outside of school hours.
Our recent experience with pandemic-related school closures provides more evidence that being in school was superior to any remote experience, and not only academically. School-age children and adolescents gained weight when school closures were the norm. Play patterns shifted from outdoor play to indoor play — often dominated by more sedentary video games. Both fatal and non-fatal gun-related injuries surged during the pandemic, and, by far, the majority of these occur in the home and not at school.
Stepping back to look at this broader picture that includes diet, physical activity, and safety — not to mention the benefits of socialization — leads one to the unfortunate conclusion that, for many children, the environment outside of school is less healthy and less safe than the one inside it. Of course there will be those who point to the belief that schools are petri dishes putting children at greater risk for respiratory infections. On the other hand, we must accept that schools haven’t proved to be the major factor in the spread of COVID that many had feared.
The authors of the commentary in Science are certainly correct in recommending a more thorough investigation into the academic benefits of preschool education. However, we must keep in mind that, beyond academics, preschool offers an environment that can be a positive influence on young children.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Progestin-Only IUDs Linked to 22% Lower Ischemic Stroke Risk
Women who used levonorgestrel-releasing intrauterine devices (LG-IUDs) were 22% less likely to have a stroke than those who did not use hormonal contraception, new research suggested.
The Danish study, which included 1.7 million women, also showed no increased risk for intracerebral hemorrhage for those using the IUDs.
“The finding raises the question of whether levonorgestrel, in addition to its contraceptive properties, could have the potential to prevent (ischemic stroke),” wrote corresponding author Tom Skyhøj Olsen, MD, PhD, of Bispebjerg University Hospital, Copenhagen, Denmark, and coauthors.
The research was published online on May 16, 2024, in the journal Stroke.
A Big-Picture Look
Commonly used combined hormonal contraceptives that contain both progestins and ethinylestradiol are linked to an increased risk for ischemic stroke. Previous research suggested that progestin-only options, including LG-IUDs, are not associated with elevated risk and may even lower the risk. The IUDs had also been previously associated with lower risk for thromboembolism.
The new study was a large-scale investigation of all reproductive-age women in Denmark that compared stroke risk in those who used the progestin-only IUDs with those who didn’t use hormonal contraception. It also examined the risk for intracerebral hemorrhage, which had not been previously studied.
The historical cohort study drew on several large national databases in Denmark, including the Danish Stroke Registry, to evaluate the interplay between IUD contraception, stroke, and intracerebral hemorrhage. The study looked back at data collected on all nonpregnant Danish women aged 18-49 years who lived in Denmark for some or all of the period between 2004 and 2021.
Mean age of the 1.7 million women in the study was 30 years, and the mean follow-up period was about 7 years. More than 364,700 participants used LG-IUDs.
During the study period, 2916 women had an ischemic stroke, and 367 experienced intracerebral hemorrhage.
Among IUD users, the incidence of ischemic stroke was 19.2 per 100,000 person-years. For women who didn’t use hormonal contraception, the rate was 25.2 per 100,000 person-years.
Overall, those who used an IUD had a 22% lower risk for ischemic stroke than those who didn’t (incidence rate ratio [IRR], 0.78; 95% CI, 0.70-0.88).
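As a rough arithmetic check, a crude incidence rate ratio is just the ratio of the two rates, and a 95% CI can be approximated on the log scale. The sketch below is illustrative only: the event counts are hypothetical values chosen to be consistent with the reported rates, and the published IRR of 0.78 reflects the authors' own analysis.

```python
# Crude incidence rate ratio (IRR) with a log-scale 95% CI.
# Rates are from the article; event counts are hypothetical placeholders.
import math

rate_iud = 19.2e-5   # ischemic strokes per person-year, LG-IUD users
rate_none = 25.2e-5  # per person-year, no hormonal contraception

print(f"crude IRR ~ {rate_iud / rate_none:.2f}")  # ~0.76

def irr_ci(events_a, py_a, events_b, py_b, z=1.96):
    """IRR and 95% CI via the standard normal approximation on log(IRR)."""
    irr = (events_a / py_a) / (events_b / py_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    return (irr,
            math.exp(math.log(irr) - z * se_log),
            math.exp(math.log(irr) + z * se_log))

# Hypothetical split of the 2916 strokes consistent with the rates above.
print(irr_ci(events_a=500, py_a=500 / rate_iud,
             events_b=2416, py_b=2416 / rate_none))
```

The crude ratio (about 0.76) lands close to the adjusted IRR of 0.78 reported by the investigators, whose analysis also accounts for covariates.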
The incidence of brain bleeds was similar in both groups.
Does Age Matter?
The incidence of stroke did not differ significantly among the three age groups analyzed in the study: women aged 18-29 years, 30-39 years, and 40-49 years. Incidence rates of intracerebral hemorrhage were similar between the 30-39 and 40-49 age groups, but the risk was higher for those aged 18-29 years than for those aged 40-49 years (IRR, 4.49; 95% CI, 1.65-12.19).
The researchers urged caution in interpreting the apparent higher risk for brain bleeds in younger women, noting that the overall number of events was low, resulting in wide CIs.
Investigators also found that women who moved to Denmark from non-Western countries had a significantly lower stroke rate than native Danes. Incidence rates of intracerebral hemorrhage were not affected by country of origin.
The research team noted that they had only indirect information about women’s stroke risk factors including diabetes, high blood pressure, and migraine and had no information about smoking, alcohol consumption, and body mass index.
“Regarding a possible potential for stroke prevention, our study cannot stand alone and requires confirmation in further research. Even though the incidence rate for [ischemic stroke] and [intracerebral hemorrhage] did not significantly change after adjustment for various factors, bias…cannot be fully ruled out,” the researchers wrote.
The study was funded by the Aase og Ejnar Danielsens Fond and Familien Hede Nielsens Fond. The authors reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
Dupilumab Evaluated as Treatment for Pediatric Alopecia Areata
Treatment with dupilumab was associated with significant and durable hair regrowth in children with alopecia areata (AA) and comorbid atopic dermatitis (AD), a single-center case series showed.
“We might be opening a new avenue for a safe, long-term treatment for our children with AA,” the study’s lead investigator, Emma Guttman-Yassky, MD, PhD, professor and chair of dermatology at the Icahn School of Medicine at Mount Sinai, New York City, said in an interview during the annual meeting of the Society for Investigative Dermatology (SID), where the results were presented during a poster session. “I think AA is likely joining the atopic march, which may allow us to adapt some treatments from the atopy world to AA.”
When the original phase 2 and phase 3 trials of dupilumab for patients with moderate to severe AD were being conducted, Dr. Guttman-Yassky, one of the investigators, recalled observing that some patients who also had patchy alopecia experienced hair regrowth. “I was scratching my head because, at the time, AA was considered to be only a Th1-driven disease,” she said. “I asked myself, ‘How can this happen?’ I looked in the literature and found many publications linking atopy in general to alopecia areata. The largest of the dermatologic publications showed that eczema and atopy in general are the highest comorbidities in alopecia areata.”
“This and other findings such as IL [interleukin]-13 genetic linkage with AA and high IgE in patients with AA link AA with Th2 immune skewing, particularly in the setting of atopy,” she continued. In addition, she said, in a large biomarker study involving the scalp and blood of patients with AA, “we found increases in Th2 biomarkers that were associated with alopecia severity.”
Case Series of 20 Pediatric Patients
As part of a case series of children with both AD and AA, Dr. Guttman-Yassky and colleagues evaluated hair regrowth using the Severity of Alopecia Tool (SALT) in 20 pediatric patients (mean age, 10.8 years) who were being treated at Mount Sinai. They collected patient demographics, atopic history, immunoglobulin E (IgE) levels, and SALT scores at follow-up visits every 12-16 weeks for more than 72 weeks and performed Spearman correlations between clinical scores, demographics, and IgE levels.
At baseline, the mean SALT score was 54.4, the mean IgE level was 1567.7 IU/mL, and 75% of patients also had a family history of atopy. The mean follow-up was 67.6 weeks. The researchers observed a significant reduction in SALT scores at week 48 compared with baseline (a mean score of 20.4; P < .01) and continued improvement up to at least 72 weeks (P < .01 vs baseline). They also noted that patients who achieved a treatment response at week 24 had baseline IgE levels > 200 IU/mL.
In other findings, baseline IgE positively correlated with improvement in SALT scores at week 36 (P < .05), while baseline SALT scores positively correlated with disease duration (P < .01) and negatively correlated with improvement in SALT scores at weeks 24, 36, and 48 (P < .005). “The robustness of the response surprised me,” Dr. Guttman-Yassky said in the interview. “Dupilumab for AA takes time to work, but once it kicks in, it kicks in. It takes anywhere from 6 to 12 months to see hair regrowth.”
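For readers curious about the mechanics, here is a minimal sketch of such a correlation analysis: Spearman's rho between baseline IgE and SALT-score improvement. The data are synthetic (n = 20, matching the size of the series) and do not reproduce any patient values.

```python
# Spearman correlation between baseline IgE and SALT improvement.
# Synthetic data only; the shape of the analysis is the point.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 20  # size of the case series

baseline_ige = rng.lognormal(mean=6.5, sigma=1.2, size=n)  # IU/mL
# Hypothetical improvement loosely tied to IgE, plus noise.
salt_improvement = 0.01 * baseline_ige + rng.normal(scale=15, size=n)

rho, p_value = spearmanr(baseline_ige, salt_improvement)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```

Spearman's method ranks both variables before correlating them, which suits skewed measures such as IgE and avoids assuming a linear relationship.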
She acknowledged certain limitations of the analysis, including its small sample size and the fact that it was not a standardized trial. “But, based on our data and the adult data, we are very encouraged about the potential of using dupilumab for children with AA,” she said.
Mount Sinai recently announced that the National Institutes of Health awarded a $6.6 million, 5-year grant to Dr. Guttman-Yassky to further investigate dupilumab as a treatment for children with AA. She will lead a multicenter controlled trial of 76 children with alopecia affecting at least 30% of the scalp, who will be randomized 2:1 (dupilumab:placebo) for 48 weeks, followed by 48 weeks of open-label dupilumab for all participants, with 16 weeks of follow-up, for a total of 112 weeks. Participating sites include Mount Sinai, Yale University, Northwestern University, and the University of California, Irvine.
Dr. Guttman-Yassky disclosed that she is a consultant to many pharmaceutical companies, including dupilumab manufacturers Sanofi and Regeneron.
A version of this article appeared on Medscape.com.
FROM SID 2024
More Women Report First Hip Fracture in Their 60s
TOPLINE:
Women with low bone density are increasingly likely to report their first fragility hip fracture in their 60s rather than at older ages.
METHODOLOGY:
- Researchers used hip fracture data from the National Health and Nutrition Examination Survey for 2009-2010, 2013-2014, and 2017-2018.
- They included women older than 60 years with a bone mineral density T score ≤ −1 at the femoral neck, measured by dual-energy x-ray absorptiometry (the T score convention is sketched after this list).
- A fragility fracture was defined as a self-reported hip fracture resulting from a fall from standing height or less.
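For context on that criterion, a T score expresses how far a measured bone mineral density falls below the mean of a young-adult reference population, in units of the reference standard deviation. The sketch below applies the standard definition and the WHO cutoffs used in the study; the reference mean and SD are illustrative placeholders, not the NHANES reference values.

```python
# T score = (measured BMD - young-adult reference mean) / reference SD.
# Cutoffs: T <= -1 low bone density (study inclusion); T <= -2.5 osteoporosis.

def t_score(bmd_g_cm2: float, ref_mean: float = 0.86,
            ref_sd: float = 0.12) -> float:
    """Femoral neck T score; reference values here are illustrative."""
    return (bmd_g_cm2 - ref_mean) / ref_sd

def classify(t: float) -> str:
    if t <= -2.5:
        return "osteoporosis"
    if t <= -1.0:
        return "low bone density"
    return "normal"

t = t_score(0.70)
print(round(t, 2), classify(t))  # -1.33 -> low bone density
```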
TAKEAWAY:
- The number of women in their 60s who reported their first hip fracture grew by 50% from 2009 to 2018.
- The opposite was true for women in their 70s and 80s, who reported fewer first hip fractures over the study period.
- Reported fragility hip fractures in women overall decreased by half from 2009 to 2018.
- The prevalence of women with osteoporosis (T score ≤ −2.5) grew from 18.1% to 21.3% over 10 years.
IN PRACTICE:
The decrease in fractures overall and in women older than 70 years “may be due to increasing awareness and utilization of measures to decrease falls such as exercise, nutrition, health education, and environmental modifications targeted toward the elderly population,” the authors wrote. The findings also underscore the importance of earlier bone health awareness in primary care to curb the rising trend in younger women, they added.
SOURCE:
The study was led by Avica Atri, MD, of Albert Einstein Medical Center in Philadelphia. She presented the findings at ENDO 2024: The Endocrine Society Annual Meeting.
LIMITATIONS:
The study was retrospective in nature and included self-reported health data.
DISCLOSURES:
The study received no commercial funding. The authors have reported no relevant financial relationships.
A version of this article appeared on Medscape.com.