At Last, a Nasal Epinephrine Spray
This summer, the US Food and Drug Administration (FDA) fast-tracked approval of the first-in-class nasal epinephrine spray (neffy). It’s a very welcome addition to our anaphylaxis treatment armamentarium. In healthy volunteers, neffy achieved serum epinephrine levels, blood pressure increases, and pulse rate increases similar to those seen with IM epinephrine.
The Need for Neffy
It was just a few days ago that I saw a new patient with fire ant anaphylaxis. The last time he tried to use an injectable epinephrine pen, he made two mistakes. First, he placed the wrong end against his thigh, and when it did not inject, he depressed it with his thumb — in other words, he injected his thumb with epinephrine. Of course, that cannot happen with neffy.
I recall a case from a few years ago in which a child experienced anaphylaxis but the parent was hesitant to administer the epinephrine autoinjector (EAI). The parent drove to the emergency room but was delayed by traffic, and by the time they reached the ER, the child had suffered a respiratory arrest and died.
Patients are not the only ones who are hesitant to administer epinephrine; some clinicians do not treat anaphylaxis appropriately. As an allergist, I see patients after the fact for diagnosis and management. Patients often tell me of systemic allergic reactions treated with IV antihistamines and corticosteroids, and sometimes even with nebulized beta-agonists, but not epinephrine.
My opinion is that it’s not just needle phobia. As I wrote in my Medscape commentary “Injectable Epinephrine: An Epidemic of Misuse,” I believe it’s due to a misunderstanding of the guidelines and a sense that epinephrine is a potent medication to be used sparingly. Clinicians and patients must understand that epinephrine is a naturally occurring hormone and that administration produces serum levels seen under other natural circumstances (eg, stress — the fight-or-flight surge). That article also includes a patient handout, “Don’t Fear Epinephrine,” which I encourage you to read and distribute.
The potential benefits of neffy are clear:
- It should overcome fear of injection, making it more likely to be used, and used earlier, by patients, family members, and clinicians.
- It’s easier to carry than many larger devices (though not the AUVI-Q).
- It cannot be injected incorrectly.
- Its expiration date is 8 months longer than that of the EAI.
- There are no pharmacist substitutions (as there is no equivalent device).
Potential Problems With Neffy and Some Suggested Solutions
As promising and beneficial as it is, I wonder about a few training issues. In the office, patients can practice with a reusable injectable epinephrine trainer, but there is no comparable nasal spray trainer (a small model of a nose is a useful alternative for patient education). A training device should also be included with the neffy prescription, as is done with the EAI.
Neffy and Patients With Nasal Polyps or Nasal Surgery
The issue is more nuanced than simply stating that neffy cannot be used in patients with nasal polyps or prior nasal surgery. The real question is how much healthy nasal mucosa is required for absorption. Nasal surgery may be simple or complex. Nasal polyps may be obstructive or may have resolved with nasal steroid or biologic therapy. Nasal polyps affect 2% of the population, but 35% of pediatric food allergy (FA) patients develop allergic rhinitis (AR), and AR symptoms are present at baseline even when not triggered by FA. How does this influence neffy absorption? For FA patients whose anaphylactic reactions include severe nasal symptoms, neffy absorption could be further compromised, something that has not been studied.
Insurance Coverage
As we don’t yet know the comparative efficacy of neffy during actual anaphylactic episodes, it’s likely that patients, especially those with more severe food allergies, will be prescribed both the nasal and IM devices. The question remains whether insurance will cover both.
In “mild cases,” I suspect that doctors might be more inclined to prescribe neffy.
Conclusion
Delay in epinephrine use during anaphylactic episodes is frequent despite the clear indication, and these delays increase the risk for mortality. Neffy will probably save many lives.
Dr. Stadtmauer serves on the advisory board of Medscape. He is in private practice in New York City and is affiliated with the Mount Sinai School of Medicine.
A version of this article first appeared on Medscape.com.
Cannabis Use Linked to Brain Thinning in Adolescents
Cannabis use in adolescence may be linked to thinning of the cerebral cortex, research in mice and humans suggested.
The multilevel study demonstrated that tetrahydrocannabinol (THC), an active substance in cannabis, causes shrinkage of dendritic arborization — the neurons’ network of antennae that play a critical role in communication between brain cells.
The connection between dendritic arborization and cortical thickness was hinted at in an earlier study by Tomáš Paus, MD, PhD, professor of psychiatry and addictology at the University of Montreal, Quebec, Canada, and colleagues, who found that cannabis use in early adolescence was associated with lower cortical thickness in boys with a high genetic risk for schizophrenia.
“We speculated at that time that the differences in cortical thickness might be related to differences in dendritic arborization, and our current study confirmed it,” Paus said.
That confirmation came in the mouse part of the study, when coauthor Graciela Piñeyro, MD, PhD, also of the University of Montreal, counted the dendritic branches of mice exposed to THC and compared the total with the number of dendritic branches in unexposed mice. “What surprised me was finding that THC in the mice was targeting the same type of cells and structures that Dr. Paus had predicted would be affected from the human studies,” she said. “Structurally, they were mostly the neurons that contribute to synapses in the cortex, and their branching was reduced.”
Paus explained that in humans, a decrease in input from the affected dendrites “makes it harder for the brain to learn new things, interact with people, cope with new situations, et cetera. In other words, it makes the brain more vulnerable to everything that can happen in a young person’s life.”
The study was published online on October 9 in the Journal of Neuroscience.
Of Mice, Men, and Cannabis
Although associations between cannabis use by teenagers and variations in brain maturation have been well studied, the cellular and molecular underpinnings of these associations were unclear, according to the authors.
To investigate further, they conducted this three-step study. First, they exposed adolescent male mice to THC or a synthetic cannabinoid (WIN 55,212-2) and assessed differentially expressed genes, spine numbers, and the extent of dendritic complexity in the frontal cortex of each mouse.
Next, using MRI, they examined differences in cortical thickness in 34 brain regions in 140 male adolescents who experimented with cannabis before age 16 years and 327 who did not.
Then, they again conducted experiments in mice and found that 13 THC-related genes correlated with variations in cortical thickness. Virtual histology revealed that these 13 genes were coexpressed with cell markers of astrocytes, microglia, and a type of pyramidal cell enriched in genes that regulate dendritic expression.
Similarly, the WIN-related genes correlated with differences in cortical thickness and showed coexpression patterns with the same three cell types.
Furthermore, the affected genes were also found in humans, particularly in the thinner cortical regions of the adolescents who experimented with cannabis.
By acting on microglia, THC seems to promote the removal of synapses and, eventually, the reduction of the dendritic tree in mice, Piñeyro explained. That’s important not only because a similar mechanism may be at work in humans but also because “we now might have a model to test different types of cannabis products to see which ones are producing the greatest effect on neurons and therefore greater removal of synapses through the microglia. This could be a way of testing drugs that are out in the street to see which would be the most or least dangerous to the synapses in the brain.”
‘Significant Implications’
Commenting on the study, Yasmin Hurd, PhD, Ward-Coleman chair of translational neuroscience at the Icahn School of Medicine at Mount Sinai and director of the Addiction Institute of Mount Sinai in New York City, said, “These findings are in line with previous results, so they are feasible. This study adds more depth by showing that cortical genes that were differentially altered by adolescent THC correlated with cannabis-related changes in cortical thickness based on human neuroimaging data.” Hurd did not participate in the research.
“The results emphasize that consumption of potent cannabis products during adolescence can impact cortical function, which has significant implications for decision-making and risky behavior as well. It also can increase vulnerability to psychiatric disorders such as schizophrenia.”
Although a mouse model is “not truly the same as the human condition, the fact that the animal model also showed evidence of the morphological changes indicative of reduced cortical thickness, [like] the humans, is strong,” she said.
Additional research could include women and assess potential sex differences, she added.
Ronald Ellis, MD, PhD, an investigator in the Center for Medicinal Cannabis Research at the University of California, San Diego School of Medicine, said, “The findings are plausible and extend prior work showing evidence of increased risk for psychotic disorders later in life in adolescents who use cannabis.” Ellis did not participate in the research.
“Future studies should explore how these findings might vary across different demographic groups, which could provide a more inclusive understanding of how cannabis impacts the brain,” he said. “Additionally, longitudinal studies to track changes in the brain over time could help to establish causal relationships more robustly.
“The take-home message to clinicians at this point is to discuss cannabis use history carefully and confidentially with adolescent patients to better provide advice on its potential risks,” he concluded.
Paus added that he would tell patients, “If you’re going to use cannabis, don’t start early. If you have to, then do so in moderation. And if you have family history of mental illness, be very careful.”
No funding for the study was reported. Paus, Piñeyro, Hurd, and Ellis declared having no relevant financial relationships.
A version of this article appeared on Medscape.com.
Outpatient CAR T: Safe, Effective, Accessible
In one recent study, an industry-funded phase 2 trial, researchers found similar outcomes from outpatient and inpatient CAR T-cell therapy for relapsed/refractory large B-cell lymphoma with lisocabtagene maraleucel (Breyanzi).
Another recent study reported that outpatient treatment of B-cell non-Hodgkin lymphoma with tisagenlecleucel (Kymriah) had efficacy similar to that of inpatient treatment. Meanwhile, a 2023 review of CAR T-cell therapy in various settings found similar outcomes with outpatient and inpatient treatment.
“The future of CAR T-cell therapy lies in balancing safety with accessibility,” said Rayne Rouce, MD, a pediatric oncologist at Texas Children’s Cancer Center in Houston, Texas, in an interview. “Expanding CAR T-cell therapy beyond large medical centers is a critical next step.”
Great Outcomes, Low Access
Since 2017, the FDA has approved six CAR T-cell therapies, which target cancer by harnessing the power of a patient’s own T cells. As an Oregon Health & Science University/Knight Cancer Institute website explains, T cells are removed from the patient’s body, “genetically modified to make the chimeric antigen receptor, or CAR, [which] protein binds to specific proteins on the surface of cancer cells.”
Modified cells are grown and then infused back into the body, where they “multiply and may be able to destroy all the cancer cells.”
As Rouce puts it, “CAR T-cells have revolutionized the treatment of relapsed or refractory blood cancers.” One or more of the therapies have been approved to treat types of lymphoblastic leukemia, B-cell lymphoma, follicular lymphoma, mantle cell lymphoma, and multiple myeloma.
A 2023 review of clinical trial data reported complete response rates of 40%-54% in aggressive B-cell lymphoma, 67% in mantle cell lymphoma, and 69%-74% in indolent B cell lymphoma.
“Commercialization of CAR T-cell therapy brought hope that access would expand beyond the major academic medical centers with the highly specialized infrastructure and advanced laboratories required to manufacture and ultimately treat patients,” Rouce said. “However, it quickly became clear that patients who are underinsured or uninsured — or who live outside the network of the well-resourced institutions that house these therapies — are still unable to access these potentially life-saving therapies.”
A 2024 report estimated the cost of CAR T-cell therapy as $700,000-$1 million and said only a small percentage of those who could benefit from the treatment actually get it. For example, an estimated 10,000 patients with diffuse large B-cell lymphoma alone could benefit from CAR T therapy annually, but a survey of 200 US healthcare centers in 2021 found that 1900 procedures were performed overall for all indications.
Distance to Treatment Is a Major Obstacle
Even if patients have insurance plans willing to cover CAR T-cell therapy, they may not be able to get care. While more than 150 US centers are certified to administer the therapy, “distance to major medical centers with CAR T capabilities is a major obstacle,” Yuliya Linhares, MD, chief of lymphoma at Miami Cancer Institute in Miami, Florida, said in an interview.
“I have had patients who chose to not proceed with CAR T therapy due to inability to travel the distance to the medical center for pre-CAR T appointments and assessments and a lack of caretakers who are available to stay nearby,” Linhares said.
Indeed, the challenges facing patients in rural and underserved urban areas can be overwhelming, Hoda Badr, PhD, professor of medicine at Baylor College of Medicine in Houston, Texas, said in an interview.
“They must take time off work, arrange accommodations near treatment sites, and manage travel costs, all of which strain limited financial resources. The inability to afford these additional expenses can lead to delays in receiving care or patients forgoing the treatment altogether,” Badr said. She added that “the psychological and social burden of being away from family and community support systems during treatment can intensify the stress of an already difficult situation.”
A statistic tells the story of the academic/community divide. CAR T-cell therapy administration at academic centers after leukapheresis — the separation and collection of white blood cells — is reported to be around 90%, while it’s only 47% in community-based practices that have to refer patients elsewhere, Linhares noted.
Researchers Explore CAR T-Cell Therapy in the Community
Linhares is lead author of the phase 2 trial that explored administration of lisocabtagene maraleucel in 82 patients with relapsed/refractory large B-cell lymphoma. The findings were published Sept. 30 in Blood Advances.
The OUTREACH trial, funded by Juno/Bristol-Myers Squibb, treated patients in the third line and beyond at community medical centers (outpatient-monitored, 70%; inpatient-monitored, 30%). The trial didn’t require facilities to be certified by the Foundation for the Accreditation of Cellular Therapy (FACT); all had to be non-tertiary cancer centers that weren’t associated with a university. In order to administer therapy on an outpatient basis, the centers had to have phase 1 trial or hematopoietic stem cell transplant capabilities.
As Linhares explained, 72% of participating centers hadn’t provided CAR T-cell therapy before, and 44% did not have FACT accreditation. “About 32% of patients received CAR T at CAR T naive sites, while 70% of patients received CAR T as outpatients. Investigators had to decide whether patients qualified for the outpatient observation or had to be admitted for the inpatient observation,” she noted.
Community Outcomes Were Comparable to Major Trial
As for the results, grade 3 or higher adverse events occurred at a similar frequency among outpatients and inpatients at 74% and 76%, Linhares said. There were no grade 5 adverse events, and 25% of patients treated as outpatients were never hospitalized.
Response rates were similar to those in the major TRANSCEND trial, with an objective response rate of 80% and a complete response rate of 54%.
“Overall,” Linhares said, “our study demonstrated that with the availability of standard operating procedures, specially trained staff and a multidisciplinary team trained in CAR T toxicity management, inpatient and outpatient CAR T administration is feasible at specialized community medical centers.”
In 2023, another study examined patients with B-cell non-Hodgkin lymphoma who were treated on an outpatient basis with tisagenlecleucel. Researchers reported that outpatient therapy was “feasible and associated with similar efficacy outcomes as inpatient treatment.”
And a 2023 systematic literature review identified 11 studies that reported outpatient vs inpatient outcomes in CAR T-cell therapy and found “comparable response rates (80-82% in outpatient and 72-80% in inpatient).” Costs were lower in the outpatient setting.
Research findings like these are good news, Baylor College of Medicine’s Badr said. “Outpatient administration could help to scale the availability of this therapy to a broader range of healthcare settings, including those serving underserved populations. Findings indicate promising safety profiles, which is encouraging for expanding access.”
Not Every Patient Can Tolerate Outpatient Care
Linhares noted that the patients who received outpatient care in the lisocabtagene maraleucel study were in better shape than those in the inpatient group. Those selected for inpatient care had “higher disease risk characteristics, including high grade B cell lymphoma histology, higher disease burden, and having received bridging therapy. This points to the fact that the investigators properly selected patients who were at a higher risk of complications for inpatient observation. Additionally, some patients stayed as inpatient due to social factors, which increases length of stay independently of disease characteristics.”
Specifically, reasons for inpatient monitoring were disease characteristics (48%) including tumor burden and risk of adverse events; psychosocial factors (32%) including lack of caregiver support or transportation; COVID-19 precautions (8%); pre-infusion adverse events (8%) of fever and vasovagal reaction; and principal investigator decision (4%) due to limited hospital experience with CAR T-cell therapy.
Texas Children’s Cancer Center’s Rouce said “certain patients, particularly those with higher risk for complications or those who require intensive monitoring, may not be suited for outpatient CAR T-cell therapy. This may be due to other comorbidities or baseline factors known to predispose to CAR T-related toxicities. However, evidence-based risk mitigation algorithms may still allow closely monitored outpatient treatment, with recognition that hospital admission for incipient side effects may be necessary.”
What’s Next for Access to Therapy?
Rouce noted that her institution, like many others, is offering CAR T-cell therapy on an outpatient basis. “Additionally, continued scientific innovation, such as immediately available, off-the-shelf cell therapies and inducible safety switches, will ultimately improve access,” she said.
Linhares noted a recent advance and highlighted research that’s now in progress. “CAR Ts now have an indication as a second-line therapy in relapsed/refractory large B-cell lymphoma, and there are ongoing clinical trials that will potentially move CAR Ts into the first line,” she said. “Some trials are exploring allogeneic, readily available off-the-shelf CAR T for the treatment of minimal residual disease positive large B-cell lymphoma after completion of first-line therapy.”
These potential advances “are increasing the need for CAR T-capable medical centers,” Linhares noted. “More and more medical centers with expert hematology teams are becoming CAR T-certified, with more patients having access to CAR T.”
Still, she said, “I don’t think access is nearly as good as it should be. Many patients in rural areas are still unable to get this life-saving treatment.” However, “it is very possible that other novel targeted therapies, such as bispecific antibodies, will be used in place of CAR T in areas with poor CAR T access. Bispecific antibody efficacy in various B-cell lymphoma histologies is currently being explored.”
Rouce discloses relationships with Novartis and Pfizer. Linhares reports ties with Kyowa Kirin, AbbVie, ADC, BeiGene, Genentech, Gilead, GlaxoSmithKline, Seagen, and TG. Badr has no disclosures.
A version of this article appeared on Medscape.com.
In one recent study, an industry-funded phase 2 trial, researchers found similar outcomes from outpatient and inpatient CAR T-cell therapy for relapsed/refractory large B-cell lymphoma with lisocabtagene maraleucel (Breyanzi).
Another recent study reported that outpatient treatment of B cell non-Hodgkin lymphoma with tisagenlecleucel (Kymriah) had similar efficacy to inpatient treatment. Meanwhile, a 2023 review of CAR T-cell therapy in various settings found similar outcomes in outpatient and inpatient treatment.
“The future of CAR T-cell therapy lies in balancing safety with accessibility,” said Rayne Rouce, MD, a pediatric oncologist at Texas Children’s Cancer Center in Houston, Texas, in an interview. “Expanding CAR T-cell therapy beyond large medical centers is a critical next step.”
Great Outcomes, Low Access
Since 2017, the FDA has approved six CAR T-cell therapies, which target cancer by harnessing the power of a patient’s own T cells. As an Oregon Health & Sciences University/Knight Cancer Center website explains, T cells are removed from the patient’s body, “genetically modified to make the chimeric antigen receptor, or CAR, [which] protein binds to specific proteins on the surface of cancer cells.”
Modified cells are grown and then infused back into the body, where they “multiply and may be able to destroy all the cancer cells.”
As Rouce puts it, “CAR T-cells have revolutionized the treatment of relapsed or refractory blood cancers.” One or more of the therapies have been approved to treat types of lymphoblastic leukemia, B-cell lymphoma, follicular lymphoma, mantle cell lymphoma, and multiple myeloma.
A 2023 review of clinical trial data reported complete response rates of 40%-54% in aggressive B-cell lymphoma, 67% in mantle cell lymphoma, and 69%-74% in indolent B cell lymphoma.
“Commercialization of CAR T-cell therapy brought hope that access would expand beyond the major academic medical centers with the highly specialized infrastructure and advanced laboratories required to manufacture and ultimately treat patients,” Rouce said. “However, it quickly became clear that patients who are underinsured or uninsured — or who live outside the network of the well-resourced institutions that house these therapies — are still unable to access these potentially life-saving therapies.”
A 2024 report estimated the cost of CAR T-cell therapy as $700,000-$1 million and said only a small percentage of those who could benefit from the treatment actually get it. For example, an estimated 10,000 patients with diffuse large B-cell lymphoma alone could benefit from CAR T therapy annually, but a survey of 200 US healthcare centers in 2021 found that 1900 procedures were performed overall for all indications.
Distance to Treatment Is a Major Obstacle
Even if patients have insurance plans willing to cover CAR T-cell therapy, they may not be able get care. While more than 150 US centers are certified to administer the therapy, “distance to major medical centers with CAR T capabilities is a major obstacle,” Yuliya Linhares, MD, chief of lymphoma at Miami Cancer Institute in Miami, Florida, said in an interview.
“I have had patients who chose to not proceed with CAR T therapy due to inability to travel the distance to the medical center for pre-CAR T appointments and assessments and a lack of caretakers who are available to stay nearby,” Linhares said.
Indeed, the challenges facing patients in rural and underserved urban areas can be overwhelming, Hoda Badr, PhD, professor of medicine at Baylor College of Medicine in Houston, Texas, said in an interview.
“They must take time off work, arrange accommodations near treatment sites, and manage travel costs, all of which strain limited financial resources. The inability to afford these additional expenses can lead to delays in receiving care or patients forgoing the treatment altogether,” Badr said. She added that “the psychological and social burden of being away from family and community support systems during treatment can intensify the stress of an already difficult situation.”
A statistic tells the story of the urban/community divide. CAR T-cell therapy administration at academic centers after leukapheresis — the separation and collection of white blood cells — is reported to be at around 90%, while it’s only 47% in community-based practices that have to refer patients elsewhere, Linhares noted.
Researchers Explore CAR T-Cell Therapy in the Community
Linhares is lead author of the phase 2 trial that explored administration of lisocabtagene maraleucel in 82 patients with relapsed/refractory large B-cell lymphoma. The findings were published Sept. 30 in Blood Advances.
The OUTREACH trial, funded by Juno/Bristol-Myers Squibb, treated patients in the third line and beyond at community medical centers (outpatient-monitored, 70%; inpatient-monitored, 30%). The trial didn’t require facilities to be certified by the Foundation for the Accreditation of Cellular Therapy (FACT); all had to be non-tertiary cancer centers that weren’t associated with a university. In order to administer therapy on the outpatient basis, the centers had to have phase 1 or hematopoietic stem cell transplant capabilities.
As Linhares explained, 72% of participating centers hadn’t provided CAR T-cell therapy before, and 44% did not have FACT accreditation. “About 32% of patients received CAR T at CAR T naive sites, while 70% of patients received CAR T as outpatients. Investigators had to decide whether patients qualified for the outpatient observation or had to be admitted for the inpatient observation,” she noted.
Community Outcomes Were Comparable to Major Trial
As for the results, grade 3 or higher adverse events occurred at a similar frequency among outpatients and inpatients at 74% and 76%, Linhares said. There were no grade 5 adverse events, and 25% of patients treated as outpatients were never hospitalized.
Response rates were similar to those in the major TRANSCEND trial with the objective response rates rate of 80% and complete response rates of 54%.
“Overall,” Linhares said, “our study demonstrated that with the availability of standard operating procedures, specially trained staff and a multidisciplinary team trained in CAR T toxicity management, inpatient and outpatient CAR T administration is feasible at specialized community medical centers.”
In 2023, another study examined patients with B-cell non-Hodgkin lymphoma who were treated on an outpatient basis with tisagenlecleucel. Researchers reported that outpatient therapy was “feasible and associated with similar efficacy outcomes as inpatient treatment.”
And a 2023 systematic literature review identified 11 studies that reported outpatient vs inpatient outcomes in CAR T-cell therapy and found “comparable response rates (80-82% in outpatient and 72-80% in inpatient).” Costs were cheaper in the outpatient setting.
Research findings like these are good news, Baylor College of Medicine’s Badr said. “Outpatient administration could help to scale the availability of this therapy to a broader range of healthcare settings, including those serving underserved populations. Findings indicate promising safety profiles, which is encouraging for expanding access.”
Not Every Patient Can Tolerate Outpatient Care
Linhares noted that the patients who received outpatient care in the lisocabtagene maraleucel study were in better shape than those in the inpatient group. Those selected for inpatient care had “higher disease risk characteristics, including high grade B cell lymphoma histology, higher disease burden, and having received bridging therapy. This points to the fact that the investigators properly selected patients who were at a higher risk of complications for inpatient observation. Additionally, some patients stayed as inpatient due to social factors, which increases length of stay independently of disease characteristics.”
Specifically, reasons for inpatient monitoring were disease characteristics (48%) including tumor burden and risk of adverse events; psychosocial factors (32%) including lack of caregiver support or transportation; COVID-19 precautions (8%); pre-infusion adverse events (8%) of fever and vasovagal reaction; and principal investigator decision (4%) due to limited hospital experience with CAR T-cell therapy.
Texas Children’s Cancer Center’s Rouce said “certain patients, particularly those with higher risk for complications or those who require intensive monitoring, may not be suited for outpatient CAR T-cell therapy. This may be due to other comorbidities or baseline factors known to predispose to CAR T-related toxicities. However, evidence-based risk mitigation algorithms may still allow closely monitored outpatient treatment, with recognition that hospital admission for incipient side effects may be necessary.”
What’s Next for Access to Therapy?
Rouce noted that her institution, like many others, is offering CAR T-cell therapy on an outpatient basis. “Additionally, continued scientific innovation, such as immediately available, off-the-shelf cell therapies and inducible safety switches, will ultimately improve access,” she said.
Linhares noted a recent advance and highlighted research that’s now in progress. “CAR Ts now have an indication as a second-line therapy in relapsed/refractory large B-cell lymphoma, and there are ongoing clinical trials that will potentially move CAR Ts into the first line,” she said. “Some trials are exploring allogeneic, readily available off-the-shelf CAR T for the treatment of minimal residual disease positive large B-cell lymphoma after completion of first-line therapy.”
These potential advances “are increasing the need for CAR T-capable medical centers,” Linhares noted. “More and more medical centers with expert hematology teams are becoming CAR T-certified, with more patients having access to CAR T.”
Still, she said, “I don’t think access is nearly as good as it should be. Many patients in rural areas are still unable to get this life-saving treatment. “However, “it is very possible that other novel targeted therapies, such as bispecific antibodies, will be used in place of CAR T in areas with poor CAR T access. Bispecific antibody efficacy in various B cell lymphoma histologies are being currently explored.”
Rouce discloses relationships with Novartis and Pfizer. Linhares reports ties with Kyowa Kirin, AbbVie, ADC, BeiGene, Genentech, Gilead, GlaxoSmithKline, Seagen, and TG. Badr has no disclosures.
A version of this article appeared on Medscape.com.
In one recent study, an industry-funded phase 2 trial, researchers found similar outcomes from outpatient and inpatient CAR T-cell therapy for relapsed/refractory large B-cell lymphoma with lisocabtagene maraleucel (Breyanzi).
Another recent study reported that outpatient treatment of B cell non-Hodgkin lymphoma with tisagenlecleucel (Kymriah) had similar efficacy to inpatient treatment. Meanwhile, a 2023 review of CAR T-cell therapy in various settings found similar outcomes in outpatient and inpatient treatment.
“The future of CAR T-cell therapy lies in balancing safety with accessibility,” said Rayne Rouce, MD, a pediatric oncologist at Texas Children’s Cancer Center in Houston, Texas, in an interview. “Expanding CAR T-cell therapy beyond large medical centers is a critical next step.”
Great Outcomes, Low Access
Since 2017, the FDA has approved six CAR T-cell therapies, which target cancer by harnessing the power of a patient’s own T cells. As an Oregon Health & Sciences University/Knight Cancer Center website explains, T cells are removed from the patient’s body, “genetically modified to make the chimeric antigen receptor, or CAR, [which] protein binds to specific proteins on the surface of cancer cells.”
Modified cells are grown and then infused back into the body, where they “multiply and may be able to destroy all the cancer cells.”
As Rouce puts it, “CAR T-cells have revolutionized the treatment of relapsed or refractory blood cancers.” One or more of the therapies have been approved to treat types of lymphoblastic leukemia, B-cell lymphoma, follicular lymphoma, mantle cell lymphoma, and multiple myeloma.
A 2023 review of clinical trial data reported complete response rates of 40%-54% in aggressive B-cell lymphoma, 67% in mantle cell lymphoma, and 69%-74% in indolent B cell lymphoma.
“Commercialization of CAR T-cell therapy brought hope that access would expand beyond the major academic medical centers with the highly specialized infrastructure and advanced laboratories required to manufacture and ultimately treat patients,” Rouce said. “However, it quickly became clear that patients who are underinsured or uninsured — or who live outside the network of the well-resourced institutions that house these therapies — are still unable to access these potentially life-saving therapies.”
A 2024 report estimated the cost of CAR T-cell therapy as $700,000-$1 million and said only a small percentage of those who could benefit from the treatment actually get it. For example, an estimated 10,000 patients with diffuse large B-cell lymphoma alone could benefit from CAR T therapy annually, but a survey of 200 US healthcare centers in 2021 found that 1900 procedures were performed overall for all indications.
Distance to Treatment Is a Major Obstacle
Even if patients have insurance plans willing to cover CAR T-cell therapy, they may not be able get care. While more than 150 US centers are certified to administer the therapy, “distance to major medical centers with CAR T capabilities is a major obstacle,” Yuliya Linhares, MD, chief of lymphoma at Miami Cancer Institute in Miami, Florida, said in an interview.
“I have had patients who chose to not proceed with CAR T therapy due to inability to travel the distance to the medical center for pre-CAR T appointments and assessments and a lack of caretakers who are available to stay nearby,” Linhares said.
Indeed, the challenges facing patients in rural and underserved urban areas can be overwhelming, Hoda Badr, PhD, professor of medicine at Baylor College of Medicine in Houston, Texas, said in an interview.
“They must take time off work, arrange accommodations near treatment sites, and manage travel costs, all of which strain limited financial resources. The inability to afford these additional expenses can lead to delays in receiving care or patients forgoing the treatment altogether,” Badr said. She added that “the psychological and social burden of being away from family and community support systems during treatment can intensify the stress of an already difficult situation.”
A statistic tells the story of the academic/community divide. CAR T-cell therapy administration at academic centers after leukapheresis — the separation and collection of white blood cells — is reported to be around 90%, while it’s only 47% in community-based practices that have to refer patients elsewhere, Linhares noted.
Researchers Explore CAR T-Cell Therapy in the Community
Linhares is lead author of the phase 2 trial that explored administration of lisocabtagene maraleucel in 82 patients with relapsed/refractory large B-cell lymphoma. The findings were published Sept. 30 in Blood Advances.
The OUTREACH trial, funded by Juno/Bristol-Myers Squibb, treated patients in the third line and beyond at community medical centers (outpatient-monitored, 70%; inpatient-monitored, 30%). The trial didn’t require facilities to be certified by the Foundation for the Accreditation of Cellular Therapy (FACT); all had to be non-tertiary cancer centers that weren’t associated with a university. To administer therapy on an outpatient basis, the centers had to have phase 1 or hematopoietic stem cell transplant capabilities.
As Linhares explained, 72% of participating centers hadn’t provided CAR T-cell therapy before, and 44% did not have FACT accreditation. “About 32% of patients received CAR T at CAR T-naive sites, while 70% of patients received CAR T as outpatients. Investigators had to decide whether patients qualified for the outpatient observation or had to be admitted for the inpatient observation,” she noted.
Community Outcomes Were Comparable to Major Trial
As for the results, grade 3 or higher adverse events occurred at a similar frequency among outpatients and inpatients, at 74% and 76%, respectively, Linhares said. There were no grade 5 adverse events, and 25% of patients treated as outpatients were never hospitalized.
Response rates were similar to those in the major TRANSCEND trial, with an objective response rate of 80% and a complete response rate of 54%.
“Overall,” Linhares said, “our study demonstrated that with the availability of standard operating procedures, specially trained staff and a multidisciplinary team trained in CAR T toxicity management, inpatient and outpatient CAR T administration is feasible at specialized community medical centers.”
In 2023, another study examined patients with B-cell non-Hodgkin lymphoma who were treated on an outpatient basis with tisagenlecleucel. Researchers reported that outpatient therapy was “feasible and associated with similar efficacy outcomes as inpatient treatment.”
And a 2023 systematic literature review identified 11 studies that reported outpatient vs inpatient outcomes in CAR T-cell therapy and found “comparable response rates (80-82% in outpatient and 72-80% in inpatient).” Costs were lower in the outpatient setting.
Research findings like these are good news, Baylor College of Medicine’s Badr said. “Outpatient administration could help to scale the availability of this therapy to a broader range of healthcare settings, including those serving underserved populations. Findings indicate promising safety profiles, which is encouraging for expanding access.”
Not Every Patient Can Tolerate Outpatient Care
Linhares noted that the patients who received outpatient care in the lisocabtagene maraleucel study were in better shape than those in the inpatient group. Those selected for inpatient care had “higher disease risk characteristics, including high grade B cell lymphoma histology, higher disease burden, and having received bridging therapy. This points to the fact that the investigators properly selected patients who were at a higher risk of complications for inpatient observation. Additionally, some patients stayed as inpatient due to social factors, which increases length of stay independently of disease characteristics.”
Specifically, reasons for inpatient monitoring were disease characteristics (48%) including tumor burden and risk of adverse events; psychosocial factors (32%) including lack of caregiver support or transportation; COVID-19 precautions (8%); pre-infusion adverse events (8%) of fever and vasovagal reaction; and principal investigator decision (4%) due to limited hospital experience with CAR T-cell therapy.
Texas Children’s Cancer Center’s Rouce said “certain patients, particularly those with higher risk for complications or those who require intensive monitoring, may not be suited for outpatient CAR T-cell therapy. This may be due to other comorbidities or baseline factors known to predispose to CAR T-related toxicities. However, evidence-based risk mitigation algorithms may still allow closely monitored outpatient treatment, with recognition that hospital admission for incipient side effects may be necessary.”
What’s Next for Access to Therapy?
Rouce noted that her institution, like many others, is offering CAR T-cell therapy on an outpatient basis. “Additionally, continued scientific innovation, such as immediately available, off-the-shelf cell therapies and inducible safety switches, will ultimately improve access,” she said.
Linhares noted a recent advance and highlighted research that’s now in progress. “CAR Ts now have an indication as a second-line therapy in relapsed/refractory large B-cell lymphoma, and there are ongoing clinical trials that will potentially move CAR Ts into the first line,” she said. “Some trials are exploring allogeneic, readily available off-the-shelf CAR T for the treatment of minimal residual disease positive large B-cell lymphoma after completion of first-line therapy.”
These potential advances “are increasing the need for CAR T-capable medical centers,” Linhares noted. “More and more medical centers with expert hematology teams are becoming CAR T-certified, with more patients having access to CAR T.”
Still, she said, “I don’t think access is nearly as good as it should be. Many patients in rural areas are still unable to get this life-saving treatment.” However, she added, “it is very possible that other novel targeted therapies, such as bispecific antibodies, will be used in place of CAR T in areas with poor CAR T access. Bispecific antibody efficacy in various B-cell lymphoma histologies is currently being explored.”
Rouce discloses relationships with Novartis and Pfizer. Linhares reports ties with Kyowa Kirin, AbbVie, ADC, BeiGene, Genentech, Gilead, GlaxoSmithKline, Seagen, and TG. Badr has no disclosures.
A version of this article appeared on Medscape.com.
Parent Perceptions Drive Diet Changes for Children With Atopic Dermatitis
Parents’ perceptions of food triggers commonly drive dietary changes for children with atopic dermatitis, based on survey data from nearly 300 parents.
Although atopic dermatitis can be associated with an increased risk for food allergies, major allergy organizations do not currently recommend elimination diets as a treatment for atopic dermatitis, said Nadia Makkoukdji, MD, a pediatrician at Jackson Memorial Hospital, Miami, in a presentation at the American College of Allergy, Asthma, and Immunology (ACAAI) Annual Scientific Meeting.
“A fear of drastic dietary changes often prevents families from seeking the care their children need,” Makkoukdji said in an interview. In the clinical setting, Makkoukdji noted that she has seen many patients who have started food elimination diets on their own or as recommended by other doctors, and that these diets can lead to dangers such as the development of immunoglobulin E–mediated food allergies on reintroduction of eliminated foods and malnutrition. They can also produce “emotional stress in children and anxiety or depression, while also adding stress to parents and the entire family.”
Makkoukdji conducted the study to explore parents’ perceptions of these diets in management of their children’s atopic dermatitis, she said.
In the study, Makkoukdji and colleagues sought to understand parents’ perceptions of the role of diet in atopic dermatitis in their children. The researchers reviewed surveys from 298 parents of children with atopic dermatitis who were seen at a single academic center. Parents completed the surveys in the emergency department or in an allergy, dermatology, and general pediatrics clinic.
Overall, 42% of parents identified food triggers for their child’s atopic dermatitis. The most commonly identified triggers were milk (32%), tree nuts/seeds/peanuts (16%), and eggs (11%).
Of the parents who reported food triggers, 23% removed the suspected trigger food from the child’s diet completely, 20% removed suspected trigger foods from their own diets while breastfeeding, and 19% changed their infant’s formula.
In the wake of the elimination diets, 38% of the parents reported no improvement in their child’s atopic dermatitis, 35% reported a 25% improvement, and 9% reported complete resolution. The majority (79%) reintroduced eliminated foods and reported no recurrence of atopic dermatitis symptoms.
The researchers were surprised by how many parents changed their child’s diet in the belief that certain foods exacerbated their child’s atopic dermatitis, “although this perception aligns with the common concern that food allergens can trigger or worsen atopic dermatitis flares,” Makkoukdji said.
The current study highlights the need for more awareness of the limited impact of dietary modifications on atopic dermatitis in the absence of confirmed food allergies, Makkoukdji said. “Our study shows that food elimination diets are still commonly being used by parents in the local Miami population.”
The findings were limited by several factors, including the use of data from a single center and the focus only on pediatric patients, but the primary goal was to assess parental perceptions of atopic dermatitis flares in relation to dietary choices, said Makkoukdji. “Future studies that include larger and more diverse populations would be valuable for the field.”
Dietary Modifications Don’t Live Up to Hype
“Food continues to be one of the most discussed aspects of atopic dermatitis,” Peter Lio, MD, clinical assistant professor of dermatology and pediatrics at Northwestern University Feinberg School of Medicine, Chicago, Illinois, said in an interview.
“Almost all of my patients and families ask about dietary modifications, even though almost all of them have experimented with it to some degree,” said Lio. In his experience, diet plays a small role, if any, in the day-to-day management of atopic dermatitis.
This lack of effect of dietary changes is often frustrating to patients because of the persistent “common wisdom” that points to diet as a root cause of atopic dermatitis, Lio said. “Many practitioners continue to recommend excluding foods such as gluten or dairy from the diet, but generally these are only of modest help,” and although patients wish that dietary changes would fix the problem, most are left wondering why these changes didn’t help them.
The current study findings “reflect my own experience after nearly 20 years of being deeply immersed in the world of atopic dermatitis,” Lio said. Although the takeaway message does not argue against eating healthy foods, some foods do seem to make AD worse in some patients and may have nonallergic pro-inflammatory effects.
“In those cases, it is reasonable to limit or avoid those foods. However, it is extremely difficult to tell what food or foods are driving flare-ups when things are out of control, so dietary modification is generally not the best place to start,” he said.
True food allergies are much more common in patients with atopic dermatitis compared with individuals without atopic dermatitis, but the current study is not addressing these types of allergies, Lio emphasized. “If someone has true allergy to peanuts, for example, they should not be eating them; we also know that they are not ‘cheating’ because these patients would not merely have an eczema flare; they would have urticaria, angioedema, or anaphylaxis. There is tremendous confusion around this point and lots of confusion around allergy testing and its limitations.”
In addition, patients with atopic dermatitis are more likely than those without atopic dermatitis to have abnormalities in the gut microbiome and gut barrier, Lio said.
Abnormalities in the gut microbiome are different from the concept of allergy and may fall into the more complex category of barrier and microbiome disruptors, he said. Therefore, “the food category may not be nearly as important as the specific preparation of the food along with the additives (such as preservatives and emulsifiers) that may actually be driving the problem.”
Although in the past many clinicians advised patients to try cutting out certain foods to see whether atopic dermatitis symptoms improved, this strategy is not without risk, said Lio. “There have been incredible advancements in understanding the role of the gut in tolerization to foods.” Recent research has shown that by eating foods regularly, particularly those such as peanuts that seem to have more allergic potential, the body becomes tolerant, and this prevents the development of true food allergies.
As for additional research, many questions remain about the effects of types of foods, processing methods, and timing of introduction of foods on atopic dermatitis, Lio noted.
“Atopic dermatitis is a systemic condition with the immune system, with the skin/gut/respiratory barriers and microbiome involved; I think we now have a broader view of how big and complex the landscape really is,” he said.
The study received no outside funding. The researchers had no financial conflicts to disclose. Lio had no disclosures relevant to elimination diets but disclosed serving on the speakers bureau for AbbVie, Arcutis Biotherapeutics, Eli Lilly, Galderma, Hyphens Pharma, Incyte, La Roche–Posay/L’Oréal, Pfizer, Pierre Fabre Dermatologie, Regeneron/Sanofi Genzyme, and Verrica Pharmaceuticals; serving on consulting/advisory boards; or having stock options for many pharmaceutical companies. Lio also disclosed a patent pending for a Theraplex product with royalties paid and is a board member and Scientific Advisory Committee member emeritus of the National Eczema Association.
A version of this article first appeared on Medscape.com.
FROM ACAAI 2024
JIA Treatment Has Increasingly Involved New DMARDs Since 2001
TOPLINE:
The use of newer biologic or targeted synthetic disease-modifying antirheumatic drugs (b/tsDMARDs) for treating juvenile idiopathic arthritis (JIA) rose sharply from 2001 to 2022, while the use of conventional synthetic DMARDs (csDMARDs) plummeted, with adalimumab becoming the most commonly used b/tsDMARD.
METHODOLOGY:
- Researchers performed a serial cross-sectional study using Merative MarketScan Commercial Claims and Encounters data from 2000 to 2022 to describe recent trends in DMARD use for children with JIA in the United States.
- They identified 20,258 new episodes of DMARD use among 13,696 children with JIA (median age, 14 years; 67.5% girls) who newly initiated at least one DMARD.
- Participants were required to have ≥ 365 days of continuous healthcare and pharmacy eligibility prior to the index date, defined as the date of DMARD initiation.
TAKEAWAY:
- The use of csDMARDs declined from 89.5% to 43.2% between 2001 and 2022 (P < .001 for trend), whereas the use of bDMARDs increased from 10.5% to 50.0% over the same period (P < .001).
- Methotrexate was the most commonly used DMARD throughout the study period; however, as with other csDMARDs, its use declined from 42.1% in 2001 to 21.5% in 2022 (P < .001).
- Use of the tumor necrosis factor (TNF) inhibitor adalimumab doubled from 7% in 2007 to 14% in 2008 and increased further to 20.5% by 2022; adalimumab also became the predominant b/tsDMARD used after csDMARD monotherapy, accounting for 77.8% of prescriptions following csDMARDs in 2022.
- Although the use of individual TNF inhibitors increased, the class’s overall share fell in recent years as the use of newer b/tsDMARDs, such as ustekinumab and secukinumab, increased.
IN PRACTICE:
“These real-world treatment patterns give us insight into how selection of therapies for JIA has evolved with increasing availability of effective agents and help prepare for future studies on comparative DMARD safety and effectiveness,” the authors wrote.
SOURCE:
The study was led by Priyanka Yalamanchili, PharmD, MS, Center for Pharmacoepidemiology and Treatment Science, Institute for Health, Rutgers University, New Brunswick, New Jersey, and was published online October 22, 2024, in Arthritis & Rheumatology.
LIMITATIONS:
The dependence on commercial claims data may have limited the generalizability of the findings to other populations, such as those with public insurance or without insurance. The study did not have access to demographic data of the participants to investigate the presence of disparities in the use of DMARDs. Moreover, the lack of clinical details about the patients with JIA, including disease severity and specialty of prescribers, may have affected the interpretation of the results.
DISCLOSURES:
The study was supported by funding from the National Institute of Arthritis and Musculoskeletal and Skin Diseases and several other institutes of the National Institutes of Health, as well as the Rheumatology Research Foundation and the Juvenile Diabetes Research Foundation. No conflicts of interest were reported by the authors.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Maternal BMI and Eating Disorders Tied to Mental Health in Kids
TOPLINE:
Children of mothers who had obesity or eating disorders before or during pregnancy may face higher risks for neurodevelopmental and psychiatric disorders.
METHODOLOGY:
- Researchers conducted a population-based cohort study to investigate the association of maternal eating disorders and high prepregnancy body mass index (BMI) with psychiatric disorder and neurodevelopmental diagnoses in offspring.
- They used Finnish national registers to assess all live births from 2004 through 2014, with follow-up until 2021.
- Data of 392,098 mothers (mean age, 30.15 years) and 649,956 offspring (48.86% girls) were included.
- Maternal eating disorders and prepregnancy BMI were the main exposures; 1.60% of mothers had a history of eating disorders, 5.89% were underweight, and 53.13% had obesity.
- Diagnoses of children were identified and grouped by ICD-10 codes of mental, behavioral, and neurodevelopmental disorders, mood disorders, anxiety disorders, sleep disorders, attention-deficit/hyperactivity disorder, and conduct disorders, among several others.
TAKEAWAY:
- From birth until 7-17 years of age, 16.43% of offspring were diagnosed with a neurodevelopmental or psychiatric disorder.
- Maternal eating disorders were associated with psychiatric disorders in the offspring, with the largest effect sizes observed for sleep disorders (hazard ratio [HR], 2.36) and social functioning and tic disorders (HR, 2.18; P < .001 for both).
- The offspring of mothers with severe prepregnancy obesity had a more than twofold increased risk for intellectual disabilities (HR, 2.04; 95% CI, 1.83-2.28); being underweight before pregnancy was also linked to many psychiatric disorders in offspring.
- The occurrence of adverse birth outcomes along with maternal eating disorders or high BMI further increased the risk for neurodevelopmental and psychiatric disorders in the offspring.
IN PRACTICE:
“The findings underline the risk of offspring mental illness associated with maternal eating disorders and prepregnancy BMI and suggest the need to consider these exposures clinically to help prevent offspring mental illness,” the authors wrote.
SOURCE:
This study was led by Ida A.K. Nilsson, PhD, of the Department of Molecular Medicine and Surgery at the Karolinska Institutet in Stockholm, Sweden, and was published online in JAMA Network Open.
LIMITATIONS:
A limitation of the study was the relatively short follow-up time, which restricted the inclusion of late-onset psychiatric disorder diagnoses, such as schizophrenia spectrum disorders. Paternal data and genetic information, which may have influenced the interpretation of the data, were not available. Another potential bias was that mothers with eating disorders may have been more attentive to their child’s eating behavior, leading to greater access to care and diagnosis for these children.
DISCLOSURES:
This work was supported by the Swedish Research Council, the regional agreement on medical training and clinical research between Region Stockholm and the Karolinska Institutet, the Swedish Brain Foundation, and other sources. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Obesity: A Social Vulnerability
Sometime in the last year or 2 I wrote that, despite my considerable reservations, I had finally come to the conclusion that the American Medical Association’s decision to designate obesity as a disease was appropriate. My rationalization was that the disease label would open more opportunities for funding obesity treatments. However, the explosive growth and popularity of glucagon-like peptide 1 (GLP-1) agonists over the last year has had me rethinking my decision to suppress my long-held reservations about the disease designation.
So, if it’s not a disease, then what should we call it? How do we explain its surge in high-income countries that began in the 1980s? While there are still some folks who see obesity as a character flaw, I think you and I as healthcare providers have difficulty explaining the increased prevalence of obesity as either a global breakdown of willpower or a widespread genetic shift resulting from a burst of radiation from solar flares.
However, if we want to continue our search and finger-pointing, we need a better definition of exactly what obesity is. If we’re going to continue calling it a disease, we have done a pretty sloppy job of creating diagnostic criteria. To be honest, we aren’t doing such a hot job with “long COVID” either.
A recent article in the New York Times makes it clear that I’m not the only physician who is feeling uncomfortable with this lack of diagnostic specificity.
We know that using body mass index (BMI) as a criterion is imprecise. There are healthy individuals with elevated BMIs, and there are others who are carrying an unhealthy amount of fat who have normal BMIs. And there are individuals who have what might appear to be an excess amount of fat who are fit and healthy by other criteria.
Some investigators feel that a set of measurements that includes a waist and/or hip measurement may be a more accurate way of determining visceral adipose tissue. However, this body roundness index (BRI) currently relies on a tape measurement. Until the technique can be performed by an inexpensive and readily available scanner, the BRI cannot be considered a practical tool for determining obesity.
Dr. Francisco Rubino, the chair of metabolic and bariatric surgery at King’s College London, England, has been quoted as saying that “if one defines a disease inaccurately, everything that stems from that – from diagnosis to treatment to policies – will be distorted and biased.”
Denmark has been forced to relabel obesity as a risk factor because the disease designation was stressing the financial viability of its healthcare system as more and more patients were being prescribed GLP-1 agonists, sometimes off label. A rationing strategy was resulting in suboptimal treatment of a significant portion of the obese population.
Spearheaded by Dr. Rubino, a Lancet Commission composed of physicians has tasked itself with defining an “evidence-based diagnosis for obesity.” Instead of relying on a single metric such as the BMI or BRI, diagnosing “clinical obesity” would involve a broad array of observations, including a history, physical examination, standard laboratory tests, and additional testing, “naming signs and symptoms, organ by organ, tissue by tissue, with plausible mechanisms for each one.” In other words, treating each patient as an individual and using evidence-based criteria to make a diagnosis. While likely to be time consuming, this strategy feels like a more scientific approach. I suspect that once clinical obesity is more rigorously defined, it could be divided into several subtypes. For example, there would be a few conditions that were genetic, Prader-Willi syndrome being the best known.
However, I think the Lancet Commission’s strategy will find that the majority of individuals who make up this half-century global surge have become clinically obese because they have been unable to adapt to the obesogenic forces in our society, which include diet, autocentricity, and attractive sedentary forms of entertainment, to name just three.
In some cases these unfortunate individuals are more vulnerable because they were born into an economically disadvantaged situation. In other scenarios, a lack of foresight and/or political will may have left individuals with no choice but to rely on automobiles to get around. Still others may find themselves living in a nutritional desert because all of the grocery stores have closed.
I recently encountered a descriptor in a story about the Federal Emergency Management Agency that could easily be adapted to describe this large and growing subtype of individuals with clinical obesity. “Social vulnerability” is a measure of how well a community can withstand external stressors that impact human health. For example, the emergency management folks are thinking in terms of natural disasters such as hurricanes, floods, and tornadoes and are asking how well a given community could meet the challenges such an event would create.
But the term social vulnerability can easily be applied to individuals living in a society in which unhealthy food is abundant, the infrastructure discourages or outright prevents nonmotorized travel, and the temptation of sedentary entertainment options is unavoidable. Fortunately, not every citizen living in an obesogenic society becomes obese. What factors have protected the non-obese individuals from these obesogenic stressors? What are the characteristics of the unfortunate “vulnerables” living in the same society who end up being obese?
It is time to shift our focus away from a poorly defined disease model to one in which we begin looking at our society to find out why we have so many socially vulnerable individuals. The toll of obesity as it is currently defined is many orders of magnitude greater than that of any natural disaster. We have become communities that can no longer withstand the obesogenic stressors, many of which we have created and/or allowed to accumulate over the last century.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Preventing Pediatric Migraine
I suspect you all have some experience with childhood migraine. It can mean several painful hours for the patient, often arriving without warning, with recurrences spaced months or sometimes even years apart. It may be accompanied by vomiting, which in some cases overshadows the severity of the headache. It can result in lost days from school and ruin family activities. It can occur so infrequently that the family can’t recall accurately when the last episode happened. In some ways it is a different animal than the adult version.
Most of the pediatric patients with migraine I have seen experienced attacks occurring so infrequently that the families and I seldom discussed medication as an option. Back then imipramine was the only choice. Currently, however, there are more than a half dozen medications and combinations that have been tried. Recently a review of 45 clinical trials of these medications was published in JAMA Network Open.
I will let you review for yourself the details of these Iranian investigators’ network meta-analysis, but the bottom line is that some medications were associated with a reduction in migraine frequency, and others with reduced headache intensity. “However, no treatments were associated with significant improvements in quality of life or reduction of the duration of migraine attacks.”
This paper clearly illustrates that we have not yet discovered the medicinal magic bullet for pediatric migraine prophylaxis. This doesn’t surprise me. After listening to scores of families tell their migraine stories, it became apparent to me that there was often a pattern in which the child’s headache arrived after a period of acute sleep deprivation. For example, a trip to an amusement park in which travel or excitement may have resulted in the child going to bed later and/or getting up earlier. By afternoon the child’s reserves of something (currently unknown) were depleted to the point that the headache and/or vomiting struck.
These episodes were often so infrequent, separated by months, that taking a history demonstrating a recurring pattern could take considerable patience on the part of the family and the provider, even for a physician like myself who believes that better sleep is the answer for everything. However, once I could convince a family of the connection between the sleep deprivation and the headaches, they could often recall other episodes in the past that substantiated my explanation.
In some cases there was no obvious history of acute sleep deprivation, or at least it was so subtle that even a history taker with a sleep obsession couldn’t detect it. However, in these cases I could usually elicit a history of chronic sleep deprivation. For example, falling asleep instantly on automobile rides, difficulty with waking in the morning, or unhealthy bedtime routines. With this underlying vulnerability of chronic sleep deprivation, a slightly more exciting or vigorous day was all that was necessary to trigger the headache.
For those of you who don’t share my contention that childhood migraine is usually the result of sleep deprivation, consider the similarity to an epileptic seizure, which can also be triggered by fatigue. Both events are usually followed by a deep sleep from which the child wakes refreshed and symptom free.
I think it is interesting that this recent meta-analysis could find no quality-of-life benefit for any of the medications. The explanation may be that the child with migraine already had a somewhat diminished quality of life as a result of the sleep deprivation, either acute or chronic.
When speaking with parents of migraine sufferers, I would tell them that once the headache had started there was little I had to offer to forestall the inevitable pain and vomiting. Certainly not in the form of an oral medication. While many adults will have an aura that warns them of the headache onset, I have found that most children don’t describe an aura. It may be they simply lack the ability to express it. Occasionally an observant parent may detect pallor or a behavior change that indicates a migraine is beginning. On rare occasions a parent may be able to abort the attack by quickly getting the child to a quiet, dark, and calm environment.
Although this recent meta-analysis of treatment options is discouraging, it may be providing a clue to effective prophylaxis. Some of the medications that decrease the frequency of the attacks may be doing so because they improve the patient’s sleep patterns. Those that decrease the intensity of the pain are probably working on a pain pathway that is not specific to migraine.
Continuing the search for a prophylactic medication is a worthy goal, particularly for those patients whose migraines are debilitating. However, based on my experience, enhanced by my bias, the safest and most effective prophylaxis results from increasing the family’s awareness of the role that sleep deprivation plays in the illness. Even when the family buys into the message and attempts to avoid situations that will tax their vulnerable children, parents will need to accept that sometimes stuff happens even though siblings and peers may be able to tolerate the situation. Spontaneous activities can converge on a day when, for whatever reason, the migraine-prone child is overtired, and the headache and vomiting will erupt.
A lifestyle change is always preferable to a pharmacological intervention. However, that doesn’t mean it is always easy to achieve.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Just Call It ‘Chronic Rhinitis’ and Reach for These Treatments
This transcript has been edited for clarity.
Matthew F. Watto, MD: I’m here with my great friend and America’s primary care physician, Dr. Paul Nelson Williams. Paul, are you ready to talk about rhinitis?
Paul N. Williams, MD: I’m excited. It’s always the season to talk about rhinitis.
Watto: We had a great guest for this podcast, Rhinitis and Environmental Allergies with Dr. Olajumoke Fadugba from Penn Medicine. She’s an allergist and immunologist. One of her pet peeves is when people just call everything “allergic rhinitis” because we should be calling it “chronic rhinitis,” if it’s chronic. That’s an umbrella term, and there are many buckets underneath it that people could fall into.
When you’re taking a history, you have to figure out whether it’s perennial (meaning it happens year round) because certain things can cause that. Cat dander is around all the time, so people with cats might have sinus symptoms all year. Dust mites are another one, and it’s pretty hard to avoid those. Those are some perennial allergens.
Then there is allergic vs nonallergic rhinitis, which is something I hadn’t really put too much thought into.
Williams: I didn’t realize exactly how nuanced it got. Nonallergic rhinitis can still be seasonal because changes in temperature and humidity can trigger the rhinitis. And it matters what medications you use for what.
Watto: Here are some ways you can try to figure out if rhinitis is allergic or nonallergic. Ask the patient if they have itchy eyes and are sneezing a lot. That can be more of an allergic rhinitis, but both allergic and nonallergic rhinitis have the congestion, the rhinorrhea, so you can’t figure it out based on that alone.
Dr. Fadugba said that one clue that it might be nonallergic rhinitis is the age of onset. If the symptoms are later in onset (older age), then 30%-40% of rhinitis is nonallergic. If the patient has never had allergies and now all of a sudden they have new chronic sinus symptoms, it’s probably nonallergic rhinitis. It’s a diagnosis of exclusion.
I guess they need allergy testing?
Williams: If you want to make a definitive diagnosis, you need to rule it out. I suspect that you might be able to get away with some empirical treatment. If they get better, you can feel like a winner because getting booked in for allergy testing can be a little bit of a challenge.
Watto: The main treatment difference is that the oral antihistamines do not really seem to work for nonallergic rhinitis, but they can help with allergic rhinitis. Weirdly, the nasal antihistamines and nasal steroids do seem to work for both allergic and nonallergic rhinitis.
I don’t understand the mechanism there, but if you think someone might have nonallergic rhinitis, I wouldn’t go with the oral antihistamines as your first-line treatment. I would go with a nasal spray; you pretty much can’t go wrong with either an antihistamine or a steroid nasal spray.
Williams: We typically start with the nasal sprays. That’s kind of first-line for almost everybody, allergic or nonallergic. You’re probably going to start with an intranasal steroid, and then it’s kind of dealer’s choice what the patient can tolerate and afford. Sometimes you can get them covered by insurance, at least in my experience.
I will say that this is one of the medications — like nicotine patches and other things — where we as doctors don’t really counsel patients on how to use it appropriately. So with our expert, we revisited the idea of the patient pointing the nasal spray laterally, toward their ear basically, and not spraying toward their brain. There should not be a slurping sound afterward, because “if you taste it, you waste it,” as the allergists and immunologists say. It’s supposed to sit up there and not be swallowed immediately.
If your patient is sensitive to the floral flavor of some of the fluticasones (which I don’t mind so much as a user myself), then you can try mometasone or the other formulations. They are all roughly equivalent.
Watto: Speaking of medications, which medications can cause rhinitis? Any meds we commonly use in primary care?
Williams: Apparently the combined hormonal oral contraceptives can do it. Also the phosphodiesterase 5 (PDE-5) inhibitors. Drugs that cause vasodilation can also do it. Some of the antihypertensives. I’ve seen beta-blockers and angiotensin-converting enzyme (ACE) inhibitors listed specifically, and some of the medications for benign prostatic hyperplasia (BPH). So there are a couple of medications that you can think about as a potential cause of rhinitis, although my suspicion is not going to be as high as for some of the other causes.
Watto: We mentioned medication treatments for patients who are really bothered by rhinorrhea, and maybe they are already on a steroid or an antihistamine.
You can try nasal ipratropium for people that have really prominent rhinorrhea. Dr. Fadugba said that can work well, and it’s usually taken three or four times a day. I’ve had good success prescribing it for my patients. Another one that I have never prescribed, but that Dr. Fadugba said is available over the counter, is intranasal cromolyn — a mast cell stabilizer. She said it can be beneficial.
Let’s say I had a cat allergy and I was going to visit Paul. I could use the intranasal cromolyn ahead of time to reduce rhinitis when I’m around the cats.
Paul, what about montelukast? I never know what to do with that one.
Williams: I’ve seen it prescribed as a last-ditch attempt to fix chronic rhinitis. Dr. Fadugba said she only ever prescribes it for patients who have rhinitis symptoms and asthma, never just for chronic rhinitis, because it doesn’t work for that. And there is now a black-box warning from the US Food and Drug Administration (FDA). So unless there’s a solid indication for it, montelukast is not something you should just prescribe to see if it will work. That’s probably not the right approach for this.
But if the patient has difficult-to-control asthma with challenging nasal symptoms as a component, it might be a reasonable medication to try.
Watto: And finally, Paul, how does climate change possibly have anything to do with rhinitis?
Williams: I feel like I’m seeing more and more of this stuff every year. I don’t know whether that’s because I’m more attuned to it or because I’m having more symptoms myself, but it turns out the prevalence actually is going up.
We’re seeing more of it in part because it’s getting hotter outside, which in turn increases allergen production, allergen exposure, and the severity of the symptoms that go along with them. More people are having more severe disease because the world is changing as a result of the stuff that we do. So fix that. But also be mindful and expect to see even more of these problems as you move forward in your careers.
Watto: Dr. Fadugba gave us so many great tips. You can listen to the full podcast episode here.
Dr. Watto, Clinical Assistant Professor, Department of Medicine, Perelman School of Medicine at University of Pennsylvania; Internist, Department of Medicine, Hospital Medicine Section, Pennsylvania Hospital, Philadelphia, has disclosed no relevant financial relationships. Dr. Williams, Associate Professor of Clinical Medicine, Department of General Internal Medicine, Lewis Katz School of Medicine; Staff Physician, Department of General Internal Medicine, Temple Internal Medicine Associates, Philadelphia, disclosed ties with The Curbsiders.
A version of this article first appeared on Medscape.com.
Are Three Cycles of Chemotherapy as Effective as Six for Retinoblastoma?
TOPLINE:
Three cycles of adjuvant chemotherapy after enucleation were noninferior to six cycles for 5-year disease-free survival (DFS) in patients with pathologically high-risk retinoblastoma. The three-cycle regimen also resulted in fewer adverse events and lower costs.
METHODOLOGY:
- The introduction of chemotherapy has increased survival rates for patients with retinoblastoma, but the optimal number of postoperative adjuvant cycles remains unclear due to scant randomized clinical trial data for high-risk patients.
- In the new trial, participants at two premier eye centers in China were randomly assigned to receive either three (n = 94) or six (n = 93) cycles of carboplatin, etoposide, and vincristine (CEV) chemotherapy after enucleation.
- The primary endpoint was 5-year DFS, and the secondary endpoints were overall survival, safety, economic burden, and quality of life.
- Patients were followed up every 3 months for the first 2 years and then every 6 months thereafter, with a median follow-up of 79 months.
- Adverse events were graded using the National Cancer Institute Common Terminology Criteria for Adverse Events (version 5.0).
TAKEAWAY:
- The 5-year DFS rates were 90.4% and 89.2% for the three- and six-cycle groups, respectively, meeting the noninferiority criterion (P = .003).
- The six-cycle group experienced a higher frequency of adverse events, including neutropenia, anemia, and nausea, than the three-cycle group.
- The quality-of-life scores were higher in the three-cycle group, particularly in physical, emotional, and social functioning parameters.
- The total, direct, and indirect costs were significantly lower in the three-cycle group than in the six-cycle group.
IN PRACTICE:
“A three-cycle CEV regimen demonstrated noninferiority, compared with a six-cycle approach, and proved to be an efficacious adjuvant chemotherapy regimen for individuals diagnosed with pathologically high-risk retinoblastoma,” the authors of the study wrote.
In an accompanying editorial, Ning Li, MD, and colleagues wrote that the findings “could lead to changes in clinical practice, reducing treatment burden and costs without compromising patient outcomes.”
SOURCE:
This study was led by Huijing Ye, MD, PhD, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University in Guangzhou, China. Both the study and editorial were published online in JAMA.
LIMITATIONS:
The open-label design of the study might introduce bias, although an independent, blinded committee evaluated the clinical outcomes. The 12% noninferiority margin was notably substantial, considering the rarity of retinoblastoma and the wide range of survival rates. The criteria for adjuvant therapy, especially regarding choroidal invasion, were debatable and required further follow-up to clarify the prognosis related to various pathologic features.
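To make the noninferiority criterion concrete: the reported DFS difference (90.4% vs 89.2%, a gap of only 1.2 percentage points) sits well inside the 12-percentage-point margin. The short sketch below is purely illustrative and is not the trial’s actual statistical analysis; it assumes a simple Wald-type 95% confidence interval for the difference in proportions, using only the group sizes and DFS rates reported above, and checks whether the lower bound stays above the negative margin.

```python
# Illustrative noninferiority check (assumed Wald-type CI, not the trial's method).
# Inputs taken from the article: n = 94 vs 93, 5-year DFS 90.4% vs 89.2%, margin = 12 points.
import math

n3, p3 = 94, 0.904   # three-cycle group size and DFS rate
n6, p6 = 93, 0.892   # six-cycle group size and DFS rate
margin = 0.12        # prespecified noninferiority margin (absolute difference)

diff = p3 - p6                                            # positive values favor three cycles
se = math.sqrt(p3 * (1 - p3) / n3 + p6 * (1 - p6) / n6)   # standard error of the difference
lower_95 = diff - 1.96 * se                               # lower bound of the 95% CI

# Noninferiority is concluded if the lower CI bound does not cross the negative margin.
print(f"difference = {diff:.3f}, 95% CI lower bound = {lower_95:.3f}")
print("noninferior" if lower_95 > -margin else "inconclusive")
```

Under these assumed calculations the lower bound (about -0.07) remains above -0.12, consistent with the trial’s reported conclusion, though the study’s own analysis should be consulted for the definitive result.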
DISCLOSURES:
This study was supported by the Sun Yat-Sen University Clinical Research 5010 Program and the Shanghai Committee of Science and Technology. No relevant conflict of interest was disclosed by the authors of the paper or the editorial.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
DISCLOSURES:
This study was supported by the Sun Yat-Sen University Clinical Research 5010 Program and the Shanghai Committee of Science and Technology. No relevant conflict of interest was disclosed by the authors of the paper or the editorial.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.