Beware the hidden allergens in nutritional supplements
Clinicians should be alert to hidden allergens in nutritional supplements, Alison Ehrlich, MD, said at the annual meeting of the American Contact Dermatitis Society.
Allergens may be hidden in a range of supplement products, from colorings in vitamin C powders to vitamins used in hair care and other products.
“In general, our patients do not tell us what supplements they are taking,” said Dr. Ehrlich, a dermatologist who practices in Washington, D.C. Antiaging, sleep, and weight loss/weight control supplements are among the most popular, she said.
Surveys have shown that many patients do not discuss supplement use with their health care providers, in part because they believe their providers would disapprove of supplement use and because patients are not educated about supplements, she said. “This is definitely an area that we should try to learn more about,” she added.
Current regulations regarding dietary supplements stem from the Dietary Supplement Health and Education Act of 1994, which defined dietary supplements as distinct from meals but regulated them as a category of food, not as medications. Dietary supplements can be vitamins, minerals, herbs, and extracts, Dr. Ehrlich said.
“There is not a lot of safety wrapped around how supplements come onto the market,” she explained. “It is not the manufacturer’s responsibility to test these products and make sure they are safe. When they get pulled off the market, it is because safety reports are getting back to the FDA.”
Consequently, a detailed history of supplement use is important, as it may reveal possible allergens as the cause of previously unidentified reactions, she said.
Dr. Ehrlich shared a case involving a patient who claimed to have had a reaction to a “Prevage-like” product that was labeled as a crepe repair cream. Listed among the product’s ingredients was idebenone, a synthetic analogue of the popular antioxidant coenzyme Q10.
Be wary of vitamins
Another potential source of allergy is vitamin C supplements, which became especially popular during the pandemic as people sought additional immune system support, Dr. Ehrlich noted. “What kind of vitamin C product our patients are taking is important,” she said. For example, some vitamin C powders contain coloring agents, such as carmine. Some also contain gelatin, which may cause an allergic reaction in individuals with alpha-gal syndrome, she added.
In general, water-soluble vitamins such as vitamins B1 to B9, B12, and C are more likely to cause an immediate reaction, Dr. Ehrlich said. Fat-soluble vitamins, such as vitamins A, D, E, and K, are more likely to cause a delayed reaction of allergic contact dermatitis.
Dr. Ehrlich described some unusual reactions to vitamins that have been reported, including a systemic allergy associated with vitamin B1 (thiamine), burning mouth syndrome associated with vitamin B3 (nicotinate), contact urticaria associated with vitamin B5 (panthenol), systemic allergy and generalized ACD associated with vitamin E (tocopherol), and erythema multiforme–like ACD associated with vitamin K1.
Notably, vitamin B5 has been associated with ACD as an ingredient in hair products, moisturizers, and wound care products, as well as in B-complex vitamins and fortified foods, Dr. Ehrlich said.
Herbs and spices can act as allergens as well. Turmeric is a spice that has become a popular supplement ingredient, she said. Turmeric and curcumin (a compound found in turmeric) can be used as a yellow dye as well as a flavoring, but both have been associated with allergic reactions. Another popular herbal supplement, ginkgo biloba, has been marketed as a product that improves memory and cognition. It is available in pill form and in herbal teas.
“It’s really important to think about what herbal products our patients are taking, and not just in pill form,” Dr. Ehrlich said. “We need to expand our thoughts on what the herbs are in.”
Consider food additives as allergens
Food additives, in the form of colorants, preservatives, or flavoring agents, can cause allergic reactions, Dr. Ehrlich noted.
The question of whether food-additive contact sensitivity has a role in the occurrence of atopic dermatitis (AD) in children remains unclear, she said. However, a study published in 2020 found that 62% of children with AD had positive patch test reactions to at least one food-additive allergen, compared with 20% of children without AD. The additives responsible for the most reactions were azorubine (24.4%); formic acid (15.6%); and carmine, cochineal red, and amaranth (13.3% for each).
Common colorant culprits in allergic reactions include carmine, annatto, tartrazine, and spices (such as paprika and saffron), Dr. Ehrlich said. Carmine is used in meat to prevent photo-oxidation and to preserve a red color, and it has other uses as well, she said. Carmine has been associated with ACD, AD flares, and immediate hypersensitivity. Annatto is used in foods, including processed foods, butter, and cheese, to provide a yellow color. It is also found in some lipsticks and has been associated with urticaria and angioedema, she noted.
Food preservatives that have been associated with allergic reactions include butylated hydroxyanisole and sulfites, Dr. Ehrlich said. Sulfites are used to prevent food from turning brown, and they may be present in dried fruit, fruit juice, molasses, pickled foods, vinegar, and wine.
Reports of ACD in response to sodium metabisulfite have been increasing, she noted. Other sulfite reactions may occur with exposure to other products, such as cosmetics, body washes, and swimming pool water, she said.
Awareness of allergens in supplements is important “because the number of our patients taking supplements for different reasons is increasing” and allergens in supplements could account for flares, Dr. Ehrlich said. Clinicians should encourage patients to disclose which supplements they use and should review the ingredients with them to identify potential allergens that may be causing reactions, she advised.
Dr. Ehrlich has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM ACDS 2023
Study shows higher obesity-related cancer mortality in areas with more fast food
Obesity-related cancer mortality rates were higher in communities with greater access to fast food, based on data from a new cross-sectional study of more than 3,000 communities.
Although increased healthy eating has been associated with reduced risk of obesity and with reduced cancer incidence and mortality, access to healthier eating remains a challenge in communities with less access to grocery stores and healthy food options (food deserts) and/or easy access to convenience stores and fast food (food swamps), Malcolm Seth Bevel, PhD, of the Medical College of Georgia, Augusta, and colleagues wrote in their paper, published in JAMA Oncology.
In addition, data on the association between food deserts and swamps and obesity-related cancer mortality are limited, they said.
“We felt that the study was important given the fact that obesity is an epidemic in the United States, and multiple factors contribute to obesity, especially adverse food environments,” Dr. Bevel said in an interview. “Also, I lived in these areas my whole life, and saw how it affected underserved populations. There was a story that needed to be told, so we’re telling it,” he said.
In their study, the researchers analyzed food access and cancer mortality data from 3,038 counties across the United States. The food access data came from the U.S. Department of Agriculture Food Environment Atlas (FEA) for the years 2012, 2014, 2015, 2017, and 2020. Data on obesity-related cancer mortality came from the Centers for Disease Control and Prevention for the years 2010 to 2020.
Food desert scores were calculated through data from the FEA, and food swamp scores were based on the ratio of fast-food restaurants and convenience stores to grocery stores and farmers markets in a modification of the Retail Food Environment Index score.
The researchers used an age-adjusted, multiple regression model to determine the association between food desert and food swamp scores and obesity-related cancer mortality rates. Higher food swamp and food desert scores (defined as 20.0 to 58.0 or higher) were used to classify counties as having fewer healthy food resources. The primary outcome was obesity-related cancer mortality, defined as high or low (71.8 or higher per 100,000 individuals and less than 71.8 per 100,000 individuals, respectively).
Overall, high rates of obesity-related cancer mortality were 77% more likely in counties that met the criteria for high food swamp scores (adjusted odds ratio, 1.77). In addition, the researchers found a positive dose-response relationship between obesity-related cancer mortality and three increasing levels of both food desert and food swamp scores.
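For readers who want a concrete picture of this kind of county-level analysis, the minimal sketch below shows how a modified Retail Food Environment Index–style ratio and an age-adjusted logistic regression for a high/low mortality outcome might be set up in Python. It is not the authors' code: the data are randomly generated, the column names (fast_food, convenience, grocery, farmers_market, pct_over_65, mortality_rate) are hypothetical, and only the cutoffs mirror those described above.

```python
# Illustrative sketch only -- invented data, not the study's dataset or code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical number of counties

# Hypothetical per-county food outlet counts and demographics.
counties = pd.DataFrame({
    "fast_food": rng.poisson(40, n),
    "convenience": rng.poisson(30, n),
    "grocery": rng.poisson(2, n) + 1,       # +1 avoids division by zero
    "farmers_market": rng.poisson(1, n),
    "pct_over_65": rng.normal(15.5, 2.0, n),
})

# Food swamp score: ratio of unhealthy to healthy food retailers,
# in the spirit of a modified Retail Food Environment Index.
counties["food_swamp_score"] = (
    (counties["fast_food"] + counties["convenience"])
    / (counties["grocery"] + counties["farmers_market"])
)

# Simulated obesity-related cancer mortality (per 100,000), loosely tied
# to the food swamp score so the example regression has signal.
counties["mortality_rate"] = (
    55 + 0.8 * counties["food_swamp_score"] + rng.normal(0, 10, n)
)

# Dichotomize exposure and outcome using the cutoffs described above:
# high food swamp score (>= 20.0) and high obesity-related cancer
# mortality (>= 71.8 deaths per 100,000 individuals).
counties["high_swamp"] = (counties["food_swamp_score"] >= 20.0).astype(int)
counties["high_mortality"] = (counties["mortality_rate"] >= 71.8).astype(int)

# Age-adjusted logistic regression; exponentiating the high_swamp
# coefficient gives an adjusted odds ratio of the kind reported in the study.
model = smf.logit("high_mortality ~ high_swamp + pct_over_65",
                  data=counties).fit(disp=0)
print(np.exp(model.params["high_swamp"]))
```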
A total of 758 counties had obesity-related cancer mortality rates in the highest quartile. Compared to counties with low rates of obesity-related cancer mortality, counties with high rates of obesity-related cancer mortality also had a higher percentage of non-Hispanic Black residents (3.26% vs. 1.77%), higher percentage of adults older than 65 years (15.71% vs. 15.40%), higher rates of adult obesity (33.0% vs. 32.10%), and higher rates of adult diabetes (12.50% vs. 10.70%).
Possible explanations for the results include grocery stores’ lack of interest in opening locations in neighborhoods with lower socioeconomic status, which can create a food desert, the researchers wrote in their discussion. “Coupled with the increasing growth rate of fast-food restaurants in recent years and the intentional advertisement of unhealthy foods in urban neighborhoods with [people of lower income], the food desert may transform into a food swamp,” they said.
The findings were limited by several factors including the study design, which did not allow for showing a causal association of food deserts and food swamps with obesity-related cancer mortality, the researchers noted. Other limitations included the use of groups rather than individuals, the potential misclassification of food stores, and the use of county-level data on race, ethnicity, and income, they wrote.
The results indicate that “food swamps appear to be a growing epidemic across the U.S., likely because of systemic issues, and should draw concern and conversation from local and state officials,” the researchers concluded.
Community-level investments can benefit individual health
Dr. Bevel said he was not surprised by the findings, as he has seen firsthand the lack of healthy food options and growth of unhealthy food options, especially for certain populations in certain communities. “Typically, these are people who have lower socioeconomic status, primarily non-Hispanic Black or African American or Hispanic American,” he said. “I have watched people have to choose between getting fruits/vegetables versus their medications or running to fast food places to feed their families. What is truly surprising is that we’re not talking about people’s lived environment enough for my taste,” he said.
“I hope that our data and results can inform local and state policymakers to truly invest in all communities, such as funding for community gardens, and realize that adverse food environments, including the barriers in navigating these environments, have significant consequences on real people,” said Dr. Bevel. “Also, I hope that the results can help clinicians realize that a patient’s lived environment can truly affect their obesity and/or obesity-related cancer status; being cognizant of that is the first step in holistic, comprehensive care,” he said.
“One role that oncologists might be able to play in improving patients’ access to healthier food is to create and/or implement healthy lifestyle programs with gardening components to combat the poorest food environments that their patients likely reside in,” said Dr. Bevel. Clinicians also could consider the innovative approach of “food prescriptions” to help reduce the effects of deprived, built environments, he noted.
Looking ahead, next steps for research include determining the severity of association between food swamps and obesity-related cancer by varying factors such as cancer type, and examining any potential racial disparities between people living in these environments and obesity-related cancer, Dr. Bevel added.
Data provide foundation for multilevel interventions
The current study findings “raise a clarion call to elevate the discussion on food availability and access to ensure an equitable emphasis on both the importance of lifestyle factors and the upstream structural, economic, and environmental contexts that shape these behaviors at the individual level,” Karriem S. Watson, DHSc, MS, MPH, of the National Institutes of Health, Bethesda, Md., and Angela Odoms-Young, PhD, of Cornell University, Ithaca, N.Y., wrote in an accompanying editorial.
The findings provide a foundation for studies of obesity-related cancer outcomes that take the community environment into consideration, they added.
The causes of both obesity and cancer are complex, and the study findings suggest that the links between unhealthy food environments and obesity-related cancer may go beyond dietary consumption alone and extend to social and psychological factors, the editorialists noted.
“Whether dealing with the lack of access to healthy foods or an overabundance of unhealthy food, there is a critical need to develop additional research that explores the associations between obesity-related cancer mortality and food inequities,” they concluded.
The study received no outside funding. The researchers and the editorialists had no financial conflicts to disclose.
FROM JAMA ONCOLOGY
Plasma monitoring supports earlier osimertinib treatment in lung cancer patients
Previous studies have suggested that molecular progression of disease in patients with EGFR-mutant NSCLC, as measured by sequential plasma EGFR T790M, may precede radiological progression, as measured by Response Evaluation Criteria in Solid Tumors (RECIST).
However, the impact of these measures on timing of treatment changes and patient outcomes has not been examined, wrote Jordi Remon, MD, of Paris (France)–Saclay University and colleagues, in Annals of Oncology.
The European Organisation for Research and Treatment of Cancer Lung Cancer Group designed a phase 2 clinical trial known as APPLE to evaluate the use of sequential plasma EGFR T790M testing and determine the optimal sequencing of gefitinib and osimertinib in patients with EGFR-mutant NSCLC.
The researchers reported results from two randomized arms of the APPLE trial. In arm B, 52 patients received gefitinib until the emergence of a circulating tumor DNA (ctDNA) EGFR T790M mutation, based on the cobas EGFR test v2 (a real-time PCR test), or disease progression based on RECIST. In arm C, 51 patients received gefitinib until disease progression based on RECIST. Both arms then switched to osimertinib. Patients randomized to a third arm (arm A) received osimertinib upfront until disease progression based on RECIST; they were not included in the current analysis.
The primary endpoint was progression-free survival (PFS) while receiving osimertinib at 18 months in patients who were originally randomized to gefitinib, then switched to osimertinib at the emergence of circulating tumor DNA. Secondary endpoints included PFS, overall response rate, overall survival, and brain PFS.
Patients entered the study between November 2017 and February 2020. A total of 75% and 65% of those in arms B and C, respectively, were female, approximately 65% had the mutation EGFR Del19, and approximately one-third had baseline brain metastases. In arm B, 17% of patients switched to osimertinib based on the emergence of ctDNA T790M mutation before progressive disease based on RECIST. The median time to molecular disease progression was 266 days.
More patients in arm B met the primary endpoint of PFS while receiving osimertinib at 18 months (67.2%) than in arm C (53.5%), after a median follow-up of 30 months.
As for secondary endpoints, the median PFS in the two arms was 22.0 months and 20.2 months, respectively. Median overall survival was 42.8 months in arm C and was not reached in arm B. The median brain PFS was 24.4 months for arm B and 21.4 months for arm C.
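As a rough illustration of how time-to-event endpoints such as an 18-month PFS rate or a median PFS are typically estimated, the sketch below applies a Kaplan-Meier estimator (via the lifelines Python library) to invented follow-up data. It is not the APPLE trial analysis; the variable names and values are hypothetical.

```python
# Illustrative sketch only -- invented follow-up data, not the APPLE trial dataset.
from lifelines import KaplanMeierFitter

# Hypothetical months on osimertinib until progression or death, with a flag
# for whether the event was observed (1) or the patient was censored (0).
months_on_osimertinib = [4.0, 9.5, 12.0, 14.0, 18.5, 22.0, 25.0, 27.5, 30.0, 33.0]
event_observed = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(months_on_osimertinib, event_observed=event_observed)

# Estimated probability of remaining progression free at 18 months --
# the same kind of quantity as the 67.2% vs. 53.5% rates reported above.
print(kmf.survival_function_at_times(18.0))

# Median PFS estimate, comparable in kind to the 22.0 and 20.2 months above.
print(kmf.median_survival_time_)
```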
The benefits seen in the osimertinib patients may be due in part to the timing of the switch to correspond with molecular or radiological disease progression, the researchers wrote in their discussion.
In the future, more research is needed to determine whether molecular monitoring may impact patients’ outcomes, compared with monitoring based on radiological progression, they said.
The findings were limited by several factors, mainly the rapid evolution in the treatment landscape of EGFR-mutant NSCLC, the researchers noted.
Osimertinib is currently considered the preferred first-line treatment by most physicians, they said. “The APPLE trial is the first prospective study supporting the role of dynamic adaptive strategies based on ctDNA monitoring in patients with EGFR-mutant advanced NSCLC.”
The study was supported by AstraZeneca. Lead author Dr. Remon had no financial conflicts to disclose. Corresponding author Dr. Dziadziuszko disclosed honoraria for consultancy or lectures from AstraZeneca, Roche, Novartis, MSD, Takeda, Pfizer, Amgen, and Bristol-Myers Squibb.
FROM ANNALS OF ONCOLOGY
Survey reveals room for improvement in teen substance use screening
WASHINGTON – Substance use screening practices among primary care pediatricians leave considerable room for improvement, Deepa Camenga, MD, said in a presentation at the 2023 Pediatric Academic Societies annual meeting.
The American Academy of Pediatrics recommends universal screening for substance use in adolescents during annual health visits, but current screening rates and practices among primary care pediatricians in the United States are unknown, said Dr. Camenga, an associate professor at Yale University, New Haven, Conn.
Uniformity in screening is lacking
Dr. Camenga presented data from the 2021 AAP Periodic Survey, which included 1,683 nonretired AAP members in the United States. Residents were excluded. The current analysis included 471 pediatricians who reported providing health supervision to adolescents. Overall, 284 of the 471 included respondents (60%) reported always screening adolescent patients for substance use during a health supervision visit. Of these, 42% reported using a standardized screening instrument, Dr. Camenga said.
The majority (70%) of pediatricians who used a standardized screening tool opted for the CRAFFT tool (Car, Relax, Alone, Forget, Friends, Trouble) designed for ages 12-21 years. Another 21% reported using an unspecified screening tool, 4% used RAAPS (Rapid Assessment for Adolescent Preventive Services), 3% used S2BI (Screening to Brief Intervention), and 1% used BSTAD (Brief Screener for Tobacco, Alcohol, and other Drugs).
A total of 77% of respondents reported screening their adolescent patients for substance use without a parent or guardian present. Approximately half (52%) used paper-based screening, 22% used electronic screening, 21% used verbal screening, and 6% reported other methods.
A total of 68% and 70% of respondents, respectively, agreed or strongly agreed that top barriers to screening were the lack of an onsite provider for counseling and the lack of readily available treatment options. Other reported barriers included lack of knowledge or information, patient reluctance to discuss substance use, too many other priorities during the visit, and inadequate payment. Only 6% of respondents strongly agreed that lack of time was a barrier, said Dr. Camenga.
Screening frequency and screening practices varied by geographic region, Dr. Camenga said. Pediatricians in the South and Midwest were only half as likely as those in the Northeast to report always screening adolescents for substance use (adjusted odds ratio, 0.43 and 0.53, respectively; P < .05). Similarly, compared with pediatricians in the Northeast, those in the South, Midwest, and West were significantly less likely to report using a standardized instrument for substance use screening (aOR, 0.53, 0.24, and 0.52, respectively; P < .001 for all).
The disparities in screening by geographic region show that there is room for improvement in this area, said Dr. Camenga. Systems-level interventions such as treatment financing and access to telehealth services could improve primary care access to substance use treatment professionals, she said.
At the practice level, embedding screening and referral tools into electronic health records could potentially improve screening rates. Many primary care pediatricians do not receive training in identifying and assessing substance use in their patients, or in first-line treatment, Dr. Camenga said.
“We have to invest in a ‘train the trainer’ type of model,” she emphasized.
Data highlight regional resource gaps
The current study is important because it highlights potential missed opportunities to screen adolescents for substance use, said Sarah Yale, MD, assistant professor of pediatrics at the Medical College of Wisconsin, Milwaukee, in an interview. Dr. Yale said that the disparities in screening by region are interesting and should serve as a focus for resource investment because the lack of specialists for referral and treatment options in these areas is likely a contributing factor.
However, lack of training also plays a role, said Dr. Yale, who was not involved in the study but served as a moderator of the presentation session at the meeting. Many pediatricians in practice have not been trained in substance use screening, and the fact that many of those who did try to screen were not using a standardized screening tool indicates a need for provider education, she said. The take-home message for clinicians is to find ways to include substance use screening in the care of their adolescent patients. Additionally, more research is needed to assess how best to integrate screening tools into visits, whether on paper, electronically, or verbally, and to include training on substance use screening during pediatric medical training.
The survey was conducted by the American Academy of Pediatrics Research Division. This year’s survey was supported by the Conrad N. Hilton Foundation. Dr. Camenga had no financial conflicts to disclose. Dr. Yale had no financial conflicts to disclose.
AT PAS 2023
Active older women show heightened AFib risk
Older women with high levels of physical activity had twice the risk of atrial fibrillation (AFib) over 10 years as the risk of other cardiac disease or stroke, based on data from 46 cross-country skiers.
Although previous research suggests that women derive greater health benefits from endurance sports, compared with men, women are generally underrepresented in sports cardiology research, and most previous studies have focused on younger women, Marius Myrstad, MD, of Baerum Hospital, Gjettum, Norway, said in a presentation at the annual congress of the European Association of Preventive Cardiology.
Previous research also has shown an increased risk of AFib in male endurance athletes, but similar data on women are lacking, Dr. Myrstad said.
The researchers reviewed data from the Birkebeiner Ageing Study, a study of Norwegian cross-country skiers aged 65 years and older who were followed for 10 years. The participants were competitors in the 2009/2010 Birkebeiner race, a 54-km cross country ski race in Norway.
Participants responded to a questionnaire addressing cardiovascular disease risk factors, exercise habits, and other health issues. The mean age at baseline was 67.5 years. A total of 34 participants (76%) were available for follow-up visits in 2014, and 36 attended a follow-up visit in 2020. Cumulative exposure to exercise was 26 years.
A total of 86% of the women reported moderate to vigorous exercise in the past year at baseline; 61% did so at the 2020 follow-up visit. One of the participants died during the study period.
“The baseline prevalence of cardiovascular conditions was very low,” Dr. Myrstad noted.
However, despite a low prevalence of cardiovascular risk factors, the risk of AFib in the study population was twice as high as for other cardiac diseases and stroke (15.6%, 7.1%, and 7.1%, respectively).
The mechanism of action for the increased AFib remains unclear, but the current study highlights the need for large, prospective studies of female athletes to address not only AFib, but also exercise-induced cardiac remodeling and cardiovascular health in general, said Dr. Myrstad.
The findings were limited by the small sample size and use of self-reports, Dr. Myrstad said, and more research is needed to clarify the association between increased AFib and high-level athletic activity in women.
“We should strive to close the gap between female and male athletes in sports cardiology research,” he added.
Consider the big picture of AFib risk
This study is important because of the growing recognition that atrial fibrillation may be a preventable disease, Gregory Marcus, MD, a cardiologist at the University of California, San Francisco, said in an interview.
“Various behaviors or exposures that are under the control of the individual patient may reveal especially powerful means to help reduce risk,” he added.
Dr. Marcus said he was not surprised by the current study findings, as they reflect those of other studies suggesting a heightened risk for atrial fibrillation associated with excessive exercise. However, the study was limited by the relatively small size and lack of a comparison group, he said. In addition, “The study was observational, and therefore the possibility that factors other than the predictor of interest, in this case intensive endurance exercise, were truly causal of atrial fibrillation could not be excluded,” he noted.
“It is very important to place this specialized analysis in the greater context of the full weight of evidence related to physical activity and atrial fibrillation,” said Dr. Marcus. “Specifically, when it comes to the general public and the great majority of patients we see in clinical practice, encouraging more physical activity is generally the best approach to reduce risks of atrial fibrillation,” he said. “It appears to be only in extraordinarily rigorous and prolonged endurance exercise that higher risks of atrial fibrillation may result,” he noted.
However, “Exercise also has many other benefits, related to overall cardiovascular health, brain health, bone health, and even cancer risk reduction, such that, even among the highly trained endurance athletes, the net benefit versus risk remains unknown,” he said.
“While the risk of atrial fibrillation in these highly trained endurance athletes was higher than expected, it still occurred in the minority,” Dr. Marcus said. “Therefore, there are certainly other factors yet to be identified that influence this heightened atrial fibrillation risk, and future research aimed at elucidating these other factors may help identify individuals more or less prone to atrial fibrillation or other behaviors that can help mitigate that risk.”
Dr. Myrstad disclosed lecture fees from Bayer, Boehringer-Ingelheim, Bristol Myers Squibb, MSD, and Pfizer unrelated to the current study. Dr. Marcus disclosed serving as a consultant for Johnson and Johnson and InCarda, and holding equity as a cofounder of InCarda.
FROM ESC PREVENTIVE CARDIOLOGY 2023
Cancer pain declines with cannabis use
in a study.
Physician-prescribed cannabis, particularly cannabinoids, has been shown to ease cancer-related pain in adult cancer patients, who often find inadequate pain relief from medications including opioids, Saro Aprikian, MSc, a medical student at the Royal College of Surgeons, Dublin, and colleagues wrote in their paper.
However, real-world data on the safety and effectiveness of cannabis in the cancer population and the impact on use of other medications are lacking, the researchers said.
In the study, published in BMJ Supportive & Palliative Care, the researchers reviewed data from 358 adults with cancer who were part of a multicenter cannabis registry in Canada between May 2015 and October 2018.
The average age of the patients was 57.6 years, and 48% were men. The top three cancer diagnoses in the study population were genitourinary, breast, and colorectal.
Pain was the most common reason for obtaining a medical cannabis prescription, cited by 72.4% of patients.
Data were collected at follow-up visits conducted every 3 months over 1 year. Pain was assessed via the Brief Pain Inventory (BPI) and revised Edmonton Symptom Assessment System (ESAS-r) questionnaires and compared to baseline values. Patients rated their pain intensity on a sliding scale of 0 (none) to 10 (worst possible). Pain relief was rated on a scale of 0% (none) to 100% (complete).
Compared with baseline scores, patients showed significant decreases at 3, 6, and 9 months for BPI worst pain (5.5 at baseline; 3.6 at 3, 6, and 9 months), average pain (4.1 at baseline; 2.4, 2.3, and 2.7 at 3, 6, and 9 months, respectively), overall pain severity (2.7 at baseline; 2.3, 2.3, and 2.4 at 3, 6, and 9 months, respectively), and pain interference with daily life (4.3 at baseline; 2.4, 2.2, and 2.4 at 3, 6, and 9 months, respectively; P < .01 for all four pain measures).
“Pain severity as reported in the ESAS-r decreased significantly at 3-month, 6-month and 9-month follow-ups,” the researchers noted.
In addition, total medication burden based on the medication quantification scale (MQS) and morphine equivalent daily dose (MEDD) were recorded at 3, 6, 9, and 12 months. MQS scores decreased compared to baseline at 3, 6, 9, and 12 months in 10%, 23.5%, 26.2%, and 31.6% of patients, respectively. Also compared with baseline, 11.1%, 31.3%, and 14.3% of patients reported decreases in MEDD scores at 3, 6, and 9 months, respectively.
Overall, products with equal amounts of active ingredients tetrahydrocannabinol (THC) and cannabidiol (CBD) were more effective than were those with a predominance of either THC or CBD, the researchers wrote.
Medical cannabis was well tolerated; a total of 15 side effects were reported by 11 patients, 13 of which were minor and 2 of which were serious. The most common side effects were sleepiness and fatigue, and five patients discontinued their medical cannabis because of side effects. The two serious side effects reported during the study period – pneumonia and a cardiovascular event – were deemed unlikely to be related to the patients’ medicinal cannabis use.
The findings were limited by several factors, including the observational design, which prevented conclusions about causality, the researchers noted. Other limitations included the loss of many patients to follow-up and incomplete data on other prescription medications in many cases.
The results support the use of medical cannabis by cancer patients as an adjunct pain relief strategy and a way to potentially reduce the use of other medications such as opioids, the authors concluded.
The study was supported by the Canadian Consortium for the Investigation of Cannabinoids, Collège des Médecins du Québec, and the Canopy Growth Corporation. The researchers had no financial conflicts to disclose.
FROM BMJ SUPPORTIVE & PALLIATIVE CARE
Gender-diverse teens face barriers to physical activity
WASHINGTON – in a poster presented at the Pediatric Academic Societies annual meeting. Other barriers included body dissatisfaction and discomfort or pain from binding or tucking, based on data from 160 individuals.
Previous studies suggest that gender-diverse teens have lower levels of physical activity than cisgender teens, but data on the specific barriers to physical activity reported by gender-diverse adolescents are lacking, according to Karishma Desai, BA, a medical student at Northwestern University, Chicago, and colleagues.
The researchers reviewed data from adolescents aged 13-18 years who identified as transgender or nonbinary and lived in the United States. Participants were recruited through flyers, wallet cards, email, and social media. They completed an online survey that included questions on preferred types of physical activity and potential barriers to physical activity. Major barriers were defined as items that “almost always” or “always” got in the way of physical activity.
Overall, 51% of the participants identified as female/transfeminine, 31% as male/transmasculine, 9% as genderqueer or agender, 8% as nonbinary, and 1% as unsure. A total of 86 participants were assigned male at birth, 73 were assigned female, and 1 was assigned intersex or other. Nearly all of the participants (96%) had begun social transition; approximately half (48%) reported using a chest binder, and 75% had been or were currently taking gender-affirming hormones.
Potential negative judgment from others was the top barrier to physical activity (cited by 39% of participants), followed by body dissatisfaction from gender dysphoria (38%) and discomfort with the available options for locker rooms or changing rooms (38%). Approximately one-third (36%) of respondents reported physical discomfort or pain from binding or tucking as a barrier to physical activity, and 34% cited discomfort with requirements for a physical activity uniform or athletic clothing at school. Other gender-diverse specific barriers to physical activity included bullying related to being transgender (31%) and the inability to participate in a group of choice because of gender identity (24%).
In addition, participants cited general barriers to physical activity including bullying related to weight (33%), dissatisfaction with weight or size (31%), and bullying in general or for reasons other than gender status (29%).
However, more than 50% of respondents said they were comfortable or very comfortable (4 or 5 on a 5-point Likert scale) with physical activity on coed or all-gender teams (61%) or in individual activities (71%). By contrast, 36% were comfortable or very comfortable with a team, group, or class that aligned with their sex assigned at birth.
The majority of participants (81%) were comfortable or very comfortable with their homes or a private location as a setting for physical activity, 54% with a public space such as a park, and 43% with a school setting.
Increasing gender congruence was the biggest facilitator of physical activity, reported by 53% of participants, the researchers noted. Other facilitators of physical activity included increasing body satisfaction (43%), staying healthy to avoid long-term health problems in the future (43%), and staying healthy to prepare for gender-related surgery in the future (18%).
The study findings were limited by the use of self-reports and the use of a convenience sample, as well as the lack of data on race, the researchers noted. However, the results suggest that access to all-gender teams, standardizing physical activity clothing, and increasing inclusive facilities may promote greater physical activity participation by gender-diverse adolescents, and offering private or individual options may increase comfort with physical activity, they concluded.
Study provides teens’ perspectives
The current study is especially timely given the recent passage by the U.S. House of Representatives of the anti-trans sports bill preventing transgender women and girls from playing on sports teams “consistent with their gender identity,” said Margaret Thew, DNP, medical director of adolescent medicine at Children’s Wisconsin in Milwaukee, in an interview. Ms. Thew was not involved in the current study.
“The House bill seeks to amend federal law to require that sex shall be recognized based solely on a person’s reproductive biology and genetics at birth, for the purpose of determining compliance with Title IX in athletics,” Ms. Thew said.
“Despite political responses to sports participation for transgender adolescents, we have not heard the perspective of the teens themselves,” she emphasized. “It is imperative for parents, coaches, and clinicians to hear the adolescents’ concerns so they can advocate for the students and provide the needed support.” In addition, Ms. Thew noted, “these concerns may also provide overdue changes to the required uniforms described for specific sports.”
Ms. Thew said she was surprised by the finding of transgender teens’ comfort with coed teams and individual activities, both of which may be opportunities to promote physical activity for transgender adolescents.
However, she added that she was not surprised by some of the results. “Many transgender adolescents experience the discomfort and further body dysmorphia of being put into gender-conforming attire such as swimwear, spandex shorts for female volleyball players, or field hockey skirts, for example.”
Although many schools are establishing safe, comfortable places for all adolescents to change clothing prior to physical education and sports participation, “resources are limited, and students and parents need to advocate within the school system,” Ms. Thew noted.
“We as a society, including athletic clothing makers, need to hear the testimony of transgender adolescents on the discomfort from body modifications to better support and innovate attire to meet their needs,” she added.
“The take-home message for clinicians is twofold,” said Ms. Thew. “Clinicians need to advocate for transgender patients to have the same opportunities as all teens when it comes to sports participation and physical activity. Also, clinicians need to ask all adolescents about their comfort in participating in physical activity both on club/school teams and independently,” she said. “If barriers are identified, clinicians need to work to support the adolescent with alternative activities/attire that will promote healthy physical activities for overall health.”
The current study also suggests that transgender adolescents who may have interest in, but discomfort with, physical activity should be redirected to coed or individual sports available in their communities, Ms. Thew added.
More research is needed on innovative sports attire that would improve comfort for transgender adolescents and thereby encourage physical activity, Ms. Thew told this news organization. More data also are needed on which sports transgender adolescents participate in and why, and how these activities might be promoted, she said.
Finally, more research will be needed to examine the impact of the recent House bills on physical activity for transgender youth, Ms. Thew said.
The study was supported by the Potocsnak Family Division of Adolescent and Young Adult Medicine at Ann & Robert H. Lurie Children’s Hospital of Chicago. The researchers had no financial conflicts to disclose. Ms. Thew had no financial conflicts to disclose, but she serves on the Editorial Advisory Board of Pediatric News.
WASHINGTON – in a poster presented at the Pediatric Academic Societies annual meeting. Other barriers included body dissatisfaction and discomfort or pain from binding or tucking, based on data from 160 individuals.
Previous studies suggest that gender-diverse teens have lower levels of physical activity than cisgender teens, but data on the specific barriers to physical activity reported by gender-diverse adolescents are lacking, according to Karishma Desai, BA, a medical student at Northwestern University, Chicago, and colleagues.
The researchers reviewed data from adolescents aged 13-18 years who identified as transgender or nonbinary and lived in the United States. Participants were recruited through flyers, wallet cards, email, and social media. They completed an online survey that included questions on preferred types of physical activity and potential barriers to physical activity. Major barriers were defined as items that “almost always” or “always” got in the way of physical activity.
Overall, 51% of the participants identified as female/transfeminine, 31% as male/transmasculine, 9% as genderqueer or agender, 8% as nonbinary, and 1% as unsure. A total of 86 participants were assigned male at birth, 73 were assigned female, and 1 was assigned intersex or other. Nearly all of the participants (96%) had begun social transition; approximately half (48%) reported using a chest binder, and 75% had been or were currently taking gender-affirming hormones.
Potential negative judgment from others was the top barrier to physical activity (cited by 39% of participants), followed by body dissatisfaction from gender dysphoria (38%) and discomfort with the available options for locker rooms or changing rooms (38%). Approximately one-third (36%) of respondents reported physical discomfort or pain from binding or tucking as a barrier to physical activity, and 34% cited discomfort with requirements for a physical activity uniform or athletic clothing at school. Other gender-diverse specific barriers to physical activity included bullying related to being transgender (31%) and the inability to participate in a group of choice because of gender identity (24%).
In addition, participants cited general barriers to physical activity including bullying related to weight (33%), dissatisfaction with weight or size (31%), and bullying in general or for reasons other than gender status (29%).
However, more than 50% of respondents said they were comfortable or very comfortable (4 or 5 on a 5-point Likert Scale) with physical activity in the settings of coed or all-gender teams (61%) or engaging in individual activities (71%). By contrast, 36% were comfortable or very comfortable with a team, group, or class that aligned with sex assignment at birth.
The majority of participants (81%) were comfortable or very comfortable with their homes or a private location as a setting for physical activity, 54% with a public space such as a park, and 43% with a school setting.
Increasing gender congruence was the biggest facilitator of physical activity, reported by 53% of participants, the researchers noted. Other facilitators of physical activity included increasing body satisfaction (43%), staying healthy to avoid long-term health problems in the future (43%), and staying healthy to prepare for gender-related surgery in the future (18%).
The study findings were limited by the use of self-reports and the use of a convenience sample, as well as the lack of data on race, the researchers noted. However, the results suggest that access to all-gender teams, standardizing physical activity clothing, and increasing inclusive facilities may promote greater physical activity participation by gender-diverse adolescents, and offering private or individual options may increase comfort with physical activity, they concluded.
Study provides teens’ perspectives
The current study is especially timely given the recent passage by the U.S. House of Representatives of the anti-trans sports bill preventing transgender women and girls from playing on sports teams “consistent with their gender identity,” said Margaret Thew, DNP, medical director of adolescent medicine at Children’s Wisconsin in Milwaukee, in an interview. Ms. Thew was not involved in the current study.
“The House bill seeks to amend federal law to require that sex shall be recognized based solely on a person’s reproductive biology and genetics at birth, for the purpose of determining compliance with Title IX in athletics,” Ms. Thew said.
“Despite political responses to sports participation for transgender adolescents, we have not heard the perspective of the teens themselves,” she emphasized. “It is imperative for parents, coaches, and clinicians to hear the adolescents’ concerns so they can advocate for the students and provide the needed support.” In addition, Ms. Thew noted, “these concerns may also provide overdue changes to the required uniforms described for specific sports.”
Ms. Thew said she was surprised by the finding of transgender teens’ comfort with coed teams and individual activities, both of which may be opportunities to promote physical activity for transgender adolescents.
However, she added that she was not surprised by some of the results. “Many transgender adolescents experience the discomfort and further body dysmorphia of being put into gender-conforming attire such as swimwear, spandex shorts for female volleyball players, or field hockey skirts, for example.”
Although many schools are establishing safe, comfortable places for all adolescents to change clothing prior to physical education and sports participation, “resources are limited, and students and parents need to advocate within the school system,” Ms. Thew noted.
“We as a society, including athletic clothing makers, need to hear the testimony of transgender adolescents on the discomfort from body modifications to better support and innovate attire to meet their needs,” she added.
“The take-home message for clinicians is twofold,” said Ms. Thew. “Clinicians need to advocate for transgender patients to have the same opportunities as all teens when it comes to sports participation and physical activity. Also, clinicians need to ask all adolescents about their comfort in participating in physical activity both on club/school teams and independently,” she said. “If barriers are identified, clinicians need to work to support the adolescent with alternative activities/attire that will promote healthy physical activities for overall health.”
The current study also suggests that transgender adolescents who may have interest in, but discomfort with, physical activity should be redirected to coed or individual sports available in their communities, Ms. Thew added.
More research is needed on innovative sports attire that would improve comfort for transgender adolescents and thereby encourage physical activity, Ms. Thew told this news organization. More data also are needed on which sports transgender adolescents participate in and why, and how these activities might be promoted, she said.
Finally, more research will be needed to examine the impact of the recent House bills on physical activity for transgender youth, Ms. Thew said.
The study was supported by the Potocsnak Family Division of Adolescent and Young Adult Medicine at Ann and Robert H. Lurie’s Children’s Hospital of Chicago. The researchers had no financial conflicts to disclose. Ms. Thew had no financial conflicts to disclose, but she serves on the Editorial Advisory Board of Pediatric News.
WASHINGTON – in a poster presented at the Pediatric Academic Societies annual meeting. Other barriers included body dissatisfaction and discomfort or pain from binding or tucking, based on data from 160 individuals.
Previous studies suggest that gender-diverse teens have lower levels of physical activity than cisgender teens, but data on the specific barriers to physical activity reported by gender-diverse adolescents are lacking, according to Karishma Desai, BA, a medical student at Northwestern University, Chicago, and colleagues.
The researchers reviewed data from adolescents aged 13-18 years who identified as transgender or nonbinary and lived in the United States. Participants were recruited through flyers, wallet cards, email, and social media. They completed an online survey that included questions on preferred types of physical activity and potential barriers to physical activity. Major barriers were defined as items that “almost always” or “always” got in the way of physical activity.
Overall, 51% of the participants identified as female/transfeminine, 31% as male/transmasculine, 9% as genderqueer or agender, 8% as nonbinary, and 1% as unsure. A total of 86 participants were assigned male at birth, 73 were assigned female, and 1 was assigned intersex or other. Nearly all of the participants (96%) had begun social transition; approximately half (48%) reported using a chest binder, and 75% had been or were currently taking gender-affirming hormones.
Potential negative judgment from others was the top barrier to physical activity (cited by 39% of participants), followed by body dissatisfaction from gender dysphoria (38%) and discomfort with the available options for locker rooms or changing rooms (38%). Approximately one-third (36%) of respondents reported physical discomfort or pain from binding or tucking as a barrier to physical activity, and 34% cited discomfort with requirements for a physical activity uniform or athletic clothing at school. Other gender-diverse specific barriers to physical activity included bullying related to being transgender (31%) and the inability to participate in a group of choice because of gender identity (24%).
In addition, participants cited general barriers to physical activity including bullying related to weight (33%), dissatisfaction with weight or size (31%), and bullying in general or for reasons other than gender status (29%).
However, more than 50% of respondents said they were comfortable or very comfortable (4 or 5 on a 5-point Likert Scale) with physical activity in the settings of coed or all-gender teams (61%) or engaging in individual activities (71%). By contrast, 36% were comfortable or very comfortable with a team, group, or class that aligned with sex assignment at birth.
The majority of participants (81%) were comfortable or very comfortable with their homes or a private location as a setting for physical activity, 54% with a public space such as a park, and 43% with a school setting.
Increasing gender congruence was the biggest facilitator of physical activity, reported by 53% of participants, the researchers noted. Other facilitators of physical activity included increasing body satisfaction (43%), staying healthy to avoid long-term health problems in the future (43%), and staying healthy to prepare for gender-related surgery in the future (18%).
The study findings were limited by the use of self-reports and the use of a convenience sample, as well as the lack of data on race, the researchers noted. However, the results suggest that access to all-gender teams, standardizing physical activity clothing, and increasing inclusive facilities may promote greater physical activity participation by gender-diverse adolescents, and offering private or individual options may increase comfort with physical activity, they concluded.
Study provides teens’ perspectives
The current study is especially timely given the recent passage by the U.S. House of Representatives of the anti-trans sports bill preventing transgender women and girls from playing on sports teams “consistent with their gender identity,” said Margaret Thew, DNP, medical director of adolescent medicine at Children’s Wisconsin in Milwaukee, in an interview. Ms. Thew was not involved in the current study.
“The House bill seeks to amend federal law to require that sex shall be recognized based solely on a person’s reproductive biology and genetics at birth, for the purpose of determining compliance with Title IX in athletics,” Ms. Thew said.
“Despite political responses to sports participation for transgender adolescents, we have not heard the perspective of the teens themselves,” she emphasized. “It is imperative for parents, coaches, and clinicians to hear the adolescents’ concerns so they can advocate for the students and provide the needed support.” In addition, Ms. Thew noted, “these concerns may also provide overdue changes to the required uniforms described for specific sports.”
Ms. Thew said she was surprised by the finding of transgender teens’ comfort with coed teams and individual activities, both of which may be opportunities to promote physical activity for transgender adolescents.
However, she added that she was not surprised by some of the results. “Many transgender adolescents experience the discomfort and further body dysmorphia of being put into gender-conforming attire such as swimwear, spandex shorts for female volleyball players, or field hockey skirts, for example.”
Although many schools are establishing safe, comfortable places for all adolescents to change clothing prior to physical education and sports participation, “resources are limited, and students and parents need to advocate within the school system,” Ms. Thew noted.
“We as a society, including athletic clothing makers, need to hear the testimony of transgender adolescents on the discomfort from body modifications to better support and innovate attire to meet their needs,” she added.
“The take-home message for clinicians is twofold,” said Ms. Thew. “Clinicians need to advocate for transgender patients to have the same opportunities as all teens when it comes to sports participation and physical activity. Also, clinicians need to ask all adolescents about their comfort in participating in physical activity both on club/school teams and independently,” she said. “If barriers are identified, clinicians need to work to support the adolescent with alternative activities/attire that will promote healthy physical activities for overall health.”
The current study also suggests that transgender adolescents who may have interest in, but discomfort with, physical activity should be redirected to coed or individual sports available in their communities, Ms. Thew added.
More research is needed on innovative sports attire that would improve comfort for transgender adolescents and thereby encourage physical activity, Ms. Thew told this news organization. More data also are needed on which sports transgender adolescents participate in and why, and how these activities might be promoted, she said.
Finally, more research will be needed to examine the impact of the recent House bills on physical activity for transgender youth, Ms. Thew said.
The study was supported by the Potocsnak Family Division of Adolescent and Young Adult Medicine at Ann & Robert H. Lurie Children’s Hospital of Chicago. The researchers had no financial conflicts to disclose. Ms. Thew had no financial conflicts to disclose, but she serves on the Editorial Advisory Board of Pediatric News.
AT PAS 2023
Why is buprenorphine use flatlining?
Opioid overdose deaths are at a record high in the United States, and many of these deaths can be prevented with medications such as buprenorphine, said lead author Kao-Ping Chua, MD, of the University of Michigan, Ann Arbor, in an interview. “However, buprenorphine cannot prevent opioid overdose deaths if patients are never started on the medication or only stay on the medication for a short time. For that reason, rates of buprenorphine initiation and retention are critical metrics for measuring how well the U.S. health care system is responding to the opioid epidemic,” he said.
“At the time we started our study, several other research groups had evaluated U.S. rates of buprenorphine initiation and retention using data through 2020. However, more recent national data were lacking,” Dr. Chua told this news organization. “We felt that this was an important knowledge gap given the many changes in society that have occurred since 2020,” he noted. “For example, it was possible that the relaxation of social distancing measures during 2021 and 2022 might have reduced barriers to health care visits, thereby increasing opportunities to initiate treatment for opioid addiction with buprenorphine,” he said.
Dr. Chua and colleagues used data from the IQVIA Longitudinal Prescription Database, which reports 92% of prescriptions dispensed from retail pharmacies in the United States. “Buprenorphine products included immediate-release and extended-release formulations approved for opioid use disorder but not formulations primarily used to treat pain,” they write.
Monthly buprenorphine initiation was defined as the number of patients initiating therapy per 100,000 individuals. For retention, the researchers used a National Quality Forum-endorsed quality measure that defined retention as continuous use of buprenorphine for at least 180 days.
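To make these two metrics concrete, the sketch below shows how they could be computed from a table of dispensing records. It is an illustration only, not the study's code: the column names, the scalar population denominator, and the 30-day allowable gap used to judge "continuous use" are assumptions rather than details reported by the authors.

```python
# Illustrative sketch of the two metrics described above; column names, the
# population denominator, and the 30-day allowable gap are assumptions.
from collections import defaultdict
from datetime import date, timedelta

def monthly_initiation_rate(fills, population):
    """Initiation rate: new buprenorphine patients per 100,000 individuals,
    counted in the month of each patient's first-ever fill."""
    first_fill = {}
    for patient_id, fill_date, _days_supply in sorted(fills, key=lambda r: r[1]):
        first_fill.setdefault(patient_id, fill_date)
    counts = defaultdict(int)
    for d in first_fill.values():
        counts[(d.year, d.month)] += 1
    return {month: 100_000 * n / population for month, n in counts.items()}

def retained_180_days(patient_fills, max_gap_days=30):
    """Retention: continuous buprenorphine coverage for at least 180 days from
    the first fill, allowing a short gap between fills (assumed 30 days)."""
    patient_fills = sorted(patient_fills, key=lambda r: r[0])
    start = patient_fills[0][0]
    covered_until = start
    for fill_date, days_supply in patient_fills:
        if (fill_date - covered_until).days > max_gap_days:
            break  # coverage lapsed; stop extending the continuous episode
        covered_until = max(covered_until, fill_date + timedelta(days=days_supply))
    return (covered_until - start).days >= 180

# Example: three 30-day fills cover roughly 90 days, so retention is not met.
fills = [(date(2022, 1, 1), 30), (date(2022, 2, 1), 30), (date(2022, 3, 5), 30)]
print(retained_180_days(fills))  # False
```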
A total of 3,006,629 patients began buprenorphine therapy during the study period; approximately 43% were female.
During the first years of the study period, from January 2016 through September 2018, the monthly buprenorphine initiation rate increased from 12.5 per 100,000 to 15.9 per 100,000, with a statistically significant monthly percentage change of 0.62% (P < .001).
However, from October 2018 through October 2022, the monthly initiation rate remained essentially unchanged, with a monthly percentage change of −0.03% (P = .62).
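The monthly percentage change figures quoted in these trend analyses are conventionally derived from a log-linear fit of the rate over time; a minimal statement of that relationship, offered here for orientation rather than as the study's exact estimation procedure, is:

```latex
% Conventional definition of monthly percentage change (MPC) from a
% log-linear trend fit to the monthly rates (not necessarily the study's
% exact method).
\ln(\mathrm{rate}_t) = \alpha + \beta t + \varepsilon_t,
\qquad \mathrm{MPC} = \left(e^{\beta} - 1\right) \times 100\%
```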
From March 2020 through December 2020, the median monthly buprenorphine initiation rate was 14.4 per 100,000, only slightly lower than the rates from January 2019 through February 2020 and from January 2021 through October 2022 (15.5 per 100,000 and 15.0 per 100,000, respectively).
Over the entire study period from January 2016 through October 2022, the median monthly retention rate for buprenorphine use was 22.2%. This rate increased minimally, with no significant changes in slope and a monthly percentage change of 0.08% (P = .04).
The study findings were limited by several factors, including a lack of data on race and ethnicity, in-clinic administration of buprenorphine, and buprenorphine dispensing through methadone outpatient programs, the researchers note. Also, data did not indicate whether some patients began buprenorphine to treat pain, they say. The timing of the flattening of buprenorphine use also suggests the influence of factors beyond the COVID-19 pandemic, they write.
However, the results were strengthened by the large sample size and suggest that efforts to date to increase buprenorphine use have been unsuccessful, the researchers write. “A comprehensive approach is needed to eliminate barriers to buprenorphine initiation and retention, such as stigma and uneven access to prescribers,” they conclude.
Study highlights underuse of buprenorphine option
“Our study shows that buprenorphine initiation rates have been flat since the end of 2018 and that rates of 180-day retention in buprenorphine therapy have remained low throughout 2016-2022,” Dr. Chua told this news organization. “Neither of these findings are particularly surprising, but they are disappointing,” he said. “There were a lot of policy and clinical efforts to maintain and expand access to buprenorphine during the COVID-19 pandemic, such as allowing buprenorphine to be prescribed via telehealth without an in-person visit and eliminating training requirements for the waiver that previously was required to prescribe buprenorphine.
“The fact that buprenorphine initiation and retention did not rise after these efforts were implemented suggests that they were insufficient to meet the rising need for this medication,” he said.
The current study “adds to a growing body of research suggesting that clinicians are not maximizing opportunities to initiate buprenorphine treatment among patients with opioid addiction,” Dr. Chua said. He cited another of his recent studies in which 1 in 12 patients was prescribed buprenorphine within 30 days of an emergency department visit for opioid overdose from August 2019 to April 2021, whereas half of patients with emergency department visits for anaphylaxis were prescribed an epinephrine auto-injector.
“My hope is that our new study will further underscore to clinicians how much the health care system is underusing a critical tool to prevent opioid overdose deaths,” he said.
The federal government’s recent elimination of the waiver needed to prescribe buprenorphine may move the needle, but to what degree remains to be seen, Dr. Chua added. “It is possible this intervention will be insufficient to overcome the many other barriers to buprenorphine initiation and retention, such as stigma about the drug among clinicians, patients, and pharmacists,” he said.
Lack of education remains a barrier to buprenorphine use
The current study is important to determine whether attempts to increase buprenorphine initiation and treatment retention are working, said Reuben J. Strayer, MD, director of addiction medicine in the emergency medicine department at Maimonides Medical Center, New York, in an interview.
Dr. Strayer was not involved in the current study, but said he was surprised that initiation of buprenorphine didn’t decrease more dramatically during the pandemic, given the significant barriers to accessing care during that time.
However, “efforts to increase buprenorphine initiation and retention have not been sufficiently effective,” Dr. Strayer said. “The rise of fentanyl as a primary street opioid, replacing heroin, has dissuaded both patients and providers from initiating buprenorphine for fear of precipitated withdrawal.”
Eliminating the DATA 2000 (X) waiver removed a potential barrier to increased buprenorphine use, Dr. Strayer said. “Now that the DATA 2000 (X) waiver has been eliminated, the focus of buprenorphine access is educating primary care and inpatient providers on its use, so that patients with OUD [opioid use disorder] can be treated, regardless of the venue at which they seek care,” he said.
Looking ahead, “The priority in buprenorphine research is determining the most effective way to initiate buprenorphine without the risk of precipitated withdrawal,” Dr. Strayer added.
The study was supported in part by the Benter Foundation, the Michigan Department of Health and Human Services, and the Susan B. Meister Child Health Evaluation and Research Center in the department of pediatrics at the University of Michigan. Dr. Chua was supported by the National Institute on Drug Abuse. Dr. Strayer has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA
UTI imaging falls short in some primary care settings
WASHINGTON –
“Timely imaging is recommended after febrile UTI (fUTI) in young children to identify treatable urologic conditions,” wrote Jonathan Hatoun, MD, of Boston Children’s Hospital, and colleagues in a poster presented at the Pediatric Academic Societies annual meeting.
The American Academy of Pediatrics (AAP) currently recommends renal-bladder ultrasound (RBUS) after a febrile UTI, with voiding cystourethrogram (VCUG) reserved for an abnormal RBUS or a second fUTI, but data on clinician adherence to these recommendations are limited, the researchers said.
To characterize practice patterns regarding fUTI, the researchers reviewed data from children younger than 24 months of age with fUTI who were treated at a primary care network in Massachusetts in 2019. The definition of fUTI was temperature of 38° C or higher, positive urinalysis, and more than 50,000 CFU on urine culture. The median age of the patients was 9 months; 84% were female.
Post-UTI imaging followed the AAP guidelines in 82 cases (69.5%). The main reasons for nonadherence were lack of RBUS in 21 patients, VCUG despite normal RBUS in 9 patients, no VCUG after abnormal RBUS in 4 patients, and no VCUG after a second fUTI in 2 patients.
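Read as a decision rule, the pathway the researchers scored against is simple enough to express in a few lines; the sketch below is an illustration of that rule with hypothetical field names, not the study's analysis code.

```python
# Illustrative decision rule for the AAP imaging pathway described above:
# RBUS after a febrile UTI; VCUG only after an abnormal RBUS or a second fUTI.
# Field names are hypothetical.
def classify_imaging(got_rbus, rbus_abnormal, got_vcug, second_futi):
    """Return 'adherent', 'omission', or 'commission' for one child."""
    if not got_rbus:
        return "omission"                 # recommended RBUS was not ordered
    vcug_indicated = rbus_abnormal or second_futi
    if vcug_indicated and not got_vcug:
        return "omission"                 # recommended VCUG was not ordered
    if got_vcug and not vcug_indicated:
        return "commission"               # VCUG ordered despite normal RBUS
    return "adherent"

# Example: normal RBUS but a VCUG was still ordered -> error of commission.
print(classify_imaging(got_rbus=True, rbus_abnormal=False,
                       got_vcug=True, second_futi=False))  # "commission"
```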
Overall, nonadherence was a result of not ordering a recommended study in 23% of cases (errors of omission) and ordering an unnecessary study in 8% of cases (errors of commission).
In a multivariate analysis, commercial insurance, a larger number of providers in the practice, and younger provider age were significant independent predictors of adherence (odds ratios, 2.82, 1.38, and 0.96, respectively).
The findings were limited by the use of data from a single center; however, the results suggest that targeted training may improve guideline adherence, the researchers wrote. Additional research and quality improvement studies are needed to understand and address the impact of insurance on guideline adherence for imaging after febrile UTIs, they noted.
Provider education is essential to continued quality of care
When it comes to febrile UTIs, “it is important to stay focused on the quality of care being provided, as opposed to the usual benchmark of quantity of care,” Tim Joos, MD, a Seattle-based clinician with a combination internal medicine/pediatrics practice, said in an interview.
“This is a very simple but interesting study on provider compliance with practice guidelines,” said Dr. Joos, who was not involved in the study. “I was surprised that the providers did so well in ordering the correct imaging in 70% of the cases,” he said.
Of particular interest, Dr. Joos noted, was that “the authors also showed that older providers and those working in smaller practices are less likely to comply with this particular imaging guideline. This can be summed up as the ‘I didn’t know the guideline’ effect.”
To improve quality of care, “more research and effort should be directed at updating providers when strong new evidence changes previous practices and guidelines,” Dr. Joos told this news organization.
The study received no outside funding. The researchers and Dr. Joos had no financial conflicts to disclose.
AT PAS 2023
Cycle timing may reduce hormonal dosage for contraception
Progesterone and estrogen are often used for contraception by preventing ovulation, but the adverse effects associated with large doses of these hormones remain a concern, wrote Brenda Lyn A. Gavina, a PhD candidate at the University of the Philippines Diliman, Quezon City, and colleagues.
In a study published in PLoS Computational Biology, the researchers examined how the timing of hormone administration during a cycle might impact the amount of hormones needed for contraception. Previous research has shown that combining hormones can reduce the dosage needed, but the impact of timing on further dose reduction has not been well studied, they said.
The researchers applied optimal control theory in a mathematical model to show the contraceptive effect of estrogen and/or progesterone at different times in the menstrual cycle. The model was based on a normal menstrual cycle with pituitary and ovarian phases. The model assumed that synthesis of luteinizing hormone and follicle-stimulating hormone occurs in the pituitary, that LH and FSH are held in reserve before release into the bloodstream, and that the follicular/luteal mass goes through nine ovarian stages of development. The model also included the activity of ovarian hormones estradiol (E2), progesterone (P4), and inhibin (Inh), in a normal cycle. In the model, LH, FSH, and E2 peaked in the late follicular phase, and P4 and Inh peaked in the luteal phase.
The pituitary model predicted the synthesis, release, and clearance of LH and FSH, and the response of the pituitary to E2, P4, and Inh. The ovarian model predicted the response of E2, P4, and Inh as functions of LH and FSH.
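One generic way to read the optimal control setup sketched above, offered as an interpretation rather than the paper's exact formulation, is to minimize the total exogenous hormone delivered over a 28-day cycle while the model's hormone and follicular states remain within an anovulatory (contraceptive) constraint:

```latex
% Generic optimal-control reading of the setup (an interpretation, not the
% authors' exact objective or constraint set): u_E and u_P are the exogenous
% estrogen and progesterone inputs, x(t) collects the pituitary and ovarian
% states (LH, FSH, E2, P4, Inh, and the follicular/luteal stages), and g
% encodes the anovulation requirement (e.g., a suppressed LH surge).
\min_{u_E,\,u_P}\ \int_{0}^{28} \bigl(u_E(t) + u_P(t)\bigr)\,dt
\quad \text{subject to} \quad
\dot{x}(t) = f\bigl(x(t),\,u_E(t),\,u_P(t)\bigr),
\qquad g\bigl(x(t)\bigr) \le 0 \ \ \text{for all } t \in [0,\,28].
```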
The researchers first simulated a constant dose of exogenous progesterone monotherapy and of combined exogenous estrogen/progesterone. A P4 peak of 4.99 ng/mL was taken as marking the optimum constant dosage, both for progesterone monotherapy and for the estrogen/progesterone combination.
The researchers then assessed the impact of timing on dosage. They found that estrogen administration starting on the first day of a normal cycle prevented FSH from reaching its maximum value, that the resulting low follicular-phase FSH level together with additional P4 inhibition slowed follicular growth, and that combination estrogen/progesterone produced similar inhibition at a later follicular stage.
“The combination therapy suggests that time-varying doses of estrogen and progesterone given simultaneously from the start to the end of the 28-day period, only requires a surge in estrogen dose around the 12th day of the cycle (a delayed administration compared to the estrogen monotherapy),” they noted.
With attention to timing, the maximum progesterone levels throughout a menstrual cycle were 4.43 ng/mL, 4.66 ng/mL, and 4.31 ng/mL for estrogen monotherapy, progesterone monotherapy, and combination therapy, respectively. Total doses of the optimal exogenous hormone were 77.76 pg/mL and 48.84 ng/mL for estrogen and progesterone monotherapy, respectively, and 35.58 pg/mL and 21.67 ng/mL for estrogen and progesterone in combination.
The findings were limited by the use of a standard model that does not account for variations in cycle length, the researchers noted. However, the results reflect other studies of hormonal activity, and the model can be used in future studies of the effect of hormones on cycle length, they said.
Overall, the researchers determined that timing dosage with estrogen monotherapy based on their model could provide effective contraception with about 92% of the minimum total constant dosage, while progesterone monotherapy would be effective with approximately 43% of the total constant dose.
Although more work is needed, the current study results may guide clinicians in experimenting with the optimal treatment regimen for anovulation, the researchers said.
“The results presented here give insights on construction of timed devices that give contraception at certain parts of the menstrual cycle,” they concluded.
Model aims to improve women’s control of contraception
“Aside from wanting to contribute to controlling population growth, we aim to empower women more by giving them more control on when to conceive and start motherhood,” and be in control of contraception in a safer way, said lead author Ms. Gavina in an interview. In addition, studies are showing the noncontraceptive benefits of suppressing ovulation for managing premenstrual symptoms such as breast tenderness and irritability, and for managing diseases such as endometriosis, she said. “Anovulation also lowers the risk of ACL injuries in female athletes,” she added.
Ms. Gavina said that she was surprised primarily by the optimal control result for estrogen monotherapy. “It was surprising that, theoretically, our mathematical model, with the simplifying assumptions, showed that as low as 10% of the total dose in constant administration could achieve contraception as long as the administration of this dosage is perfectly timed, and the timing was also shown in our optimization result,” she said.
“Our model does not capture all factors in contraception, since the reproductive function in women is a very complex multiscale dynamical system highly dependent on both endogenous and exogenous hormones,” Ms. Gavina told this news organization. However, “with the emergence of more data, it can be refined to address other contraception issues. Further, although the results of this study are not directly translatable to the clinical setting, we hope that these results may aid clinicians in identifying the minimum dose and treatment schedule for contraception,” she said.
Future research directions include examining within and between women’s variabilities and adding a pharmacokinetics model to account for the effects of specific drugs, she said. “We also hope to expand or modify the current model to investigate reproductive health concerns in women, such as [polycystic ovary syndrome] and ovarian cysts,” she added.
Ms. Gavina disclosed support from the University of the Philippines Office of International Linkages, a Continuous Operational and Outcomes-based Partnership for Excellence in Research and Academic Training Enhancement grant, and a Commission on Higher Education Faculty Development Program-II scholarship.
FROM PLOS COMPUTATIONAL BIOLOGY