Transitioning from Employment in Academia to Private Practice

A Gastroenterologist’s Journey in Starting from Scratch

After more than 10 years at a large academic medical center in Chicago, Illinois, which was part of a national health care system, I did not make the decision to transition into private practice lightly.

Having built a rewarding career and spent over a quarter of my life in an academic medical center and a national health system, I found the move to starting an independent practice from scratch both exciting and daunting. The notion of leaving behind the structure, resources, and safety of the large health system was unsettling. However, as the landscape of health care continues to evolve and large structural problems within the U.S. health care system worsen, I realized that starting an independent gastroenterology practice — one focused on fixing some of these large-scale problems from the start — would not only align with my professional goals but also provide the personal satisfaction I had been unable to find.

As I reflect on my journey, there are a few key lessons I learned from making this leap — lessons that helped me transition from a highly structured employed physician environment to leading a thriving independent practice focused on redesigning gastroenterology care from scratch.

Dr. Neil Gupta



 

Lesson 1: Autonomy Opens the Door to Innovation

One of the primary reasons I left the employed physician setting was to gain greater control over my clinical practice and decision-making processes.

In a national health care system, the goal of standardization often dictates not only clinical care, but many “back end” aspects of the entire health care experience. The more visible effects include what supplies and equipment you use, how your patient appointments are scheduled, how many support staff members are assigned to help your practice, which electronic health record system you use, and how shared resources (like GI lab block time or anesthesia teams) are allocated.

However, this also impacts things we don’t usually see, such as what fees are billed for care you are providing (like facility fees), communication systems that your patients need to navigate for help, human resource systems you use, and retirement/health benefits you and your other team members receive. 

Standardization has two adverse consequences: 1) it does not allow for personalization, and as a result, 2) it suppresses innovation. Standard protocols can streamline processes, but they sometimes fail to account for nuanced differences between patients, such as genetic factors, unique medical histories, or responses (or failures to respond) to prior treatments. This rigidity can stifle innovation, as physicians are often bound by guidelines that may not reflect the latest advancements or allow for creative, individualized approaches to care. In the long term, an overemphasis on standardization risks turning health care into a one-size-fits-all model, undermining the potential for breakthroughs.

The transition was challenging at first, as we needed to engage our entire new practice with a different mindset now that many of us had autonomy for the first time. Instead of everyone just practicing health care the way they had done before, we took a page from Elon Musk and challenged every member of the team to ask three questions about everything they do on a daily basis:

  • Is what I am doing helping a patient get healthy? (Question every requirement)
  • If not, do I still need to do this? (Delete any part of the process you can)
  • If so, how can I make this easier, faster, or automated? (Simplify and optimize, accelerate cycle time, and automate)

The freedom to innovate is a hallmark of independent practice. Embracing innovation in every aspect of the practice has been the most critical lesson of this journey. 

 

Lesson 2: Financial Stewardship Is Critical for Sustainability

Running an independent practice is not just about medicine — it’s also about managing a business.

This was a stark shift from the large academic health systems, where financial decisions were handled by the “administration.” In my new role as a business owner, I had to understand the financial aspects of health care to succeed. What patients pay for health care in the United States (either directly through deductibles and coinsurance or indirectly through insurance premiums) is unsustainably high. At the same time, inflation continues to drive substantial increases in almost all the costs of delivering care: medical supplies, salaries, benefits, IT costs, and more. It was critical to develop a financial plan that accounted for both of these macroeconomic trends and, ideally, helped solve for both. In our case, that meant delivering high-quality care at a lower cost to patients and payers.

We started by reevaluating our relationship with payers. As part of a large academic health system, we were often taught to view payers as adversaries; as an independent practice looking to redesign the health care experience, it was critical for us to treat payers as partners in this journey. Understanding payer expectations and structuring contracts that aligned with shared goals of reducing total health care costs for patients was one of the foundations of our financial plan.

Offering office-based endoscopy was one innovation we implemented to significantly impact both patient affordability and practice revenue. By performing procedures like colonoscopies and upper endoscopies in an office setting rather than a hospital or ambulatory surgery center, we eliminated facility fees, which are often a significant part of the total cost of care. This directly lowers out-of-pocket expenses for patients and reduces the overall financial burden on insurance companies. At the same time, it allows the practice to capture more of the revenue from these procedures, without the overhead costs associated with larger facilities. This model creates a win-win situation: patients save money while receiving the same quality of care, and the practice experiences an increase in profitability and autonomy in managing its services.

 

Lesson 3: Collaborative Care and Multidisciplinary Teams Can Exist Anywhere

One aspect I deeply valued in academia was the collaborative environment — having specialists across disciplines work together on challenging cases. In private practice, I was concerned that I would lose this collegial atmosphere. However, I quickly learned that building a robust network of multidisciplinary collaborators was achievable in independent practice, just as it was in a large health system.

In our practice, we established close relationships with primary care physicians, surgeons, advanced practice providers, dietitians, behavioral health specialists, and others. These partnerships were not just referral networks but integrated care teams where communication and shared decision-making were prioritized. By fostering collaboration, we could offer patients comprehensive care that addressed their physical, psychological, and nutritional needs. 

For example, managing patients with chronic conditions like inflammatory bowel disease, cirrhosis, or obesity requires more than just prescribing medications. It involves regular monitoring, dietary adjustments, psychological support, and in some cases, surgical intervention. In an academic setting, coordinating this level of care can be cumbersome due to institutional barriers and siloed departments. In our practice, some of these relationships are achieved through partnerships with other like-minded practices. In other situations, team members of other disciplines are employed directly by our practice. Being in an independent practice allowed us the flexibility to prioritize working with the right team members first, and then structuring the relationship model second. 

 

Lesson 4: Technology Is a Vital Tool in Redesigning Health Care

When I worked in a large academic health system, technology was often seen as an administrative burden rather than a clinical asset. Electronic health records (EHRs) and many of the other IT systems that health care workers and patients interacted with regularly were viewed as barriers to care or sources of wasted time instead of as tools to make health care easier. As we built our new practice from scratch, it was critical to have an IT infrastructure that aligned with our core goal: simplify and automate the health care experience for everyone.

For our practice, we didn’t try to reinvent the wheel. Instead, we borrowed from other industries that had already found great solutions to problems we shared. We wanted our patients to have an excellent customer service experience when contacting our practice for scheduling, questions, refills, and more, so we implemented a unified communication system used by Fortune 100 companies with perennially high customer service scores. We wanted a human resource system that would streamline the administrative time needed to handle all the HR needs of our practice, so we implemented an HR information system with top ratings for automation and integration with other business systems. At every point in the process, we reminded ourselves to focus on simplification and automation for every user of the system.

 

Conclusion: A Rewarding Transition

The decision to leave academic medicine and start an independent gastroenterology practice wasn’t easy, but it was one of the most rewarding choices I have made. The lessons I’ve learned along the way — embracing autonomy, understanding financial stewardship, fostering collaboration, and leveraging technology — have helped me work toward a better total health care experience for the community.

This journey has also been deeply fulfilling on a personal level. It has allowed me to build stronger relationships with my patients, focus on long-term health outcomes, and create a practice where innovation and quality truly matter. While the challenges of running a private practice are real, the rewards — both for me and my patients — are immeasurable. If I had to do it all over again, I wouldn’t hesitate for a moment. If anything, I should have done it earlier.

Dr. Gupta is Managing Partner at Midwest Digestive Health & Nutrition, in Des Plaines, Illinois. He has reported no conflicts of interest in relation to this article.


A Child’s Picky Eating: Normal Phase or Health Concern?


“My child is a poor eater” is a complaint frequently heard during medical consultations. Such concerns are often unjustified but a source of much parental frustration. 

Marc Bellaïche, MD, a pediatrician at Robert-Debré Hospital in Paris, addressed this issue at France’s annual general medicine conference (JNMG 2024). His presentation focused on distinguishing between parental perception, typical childhood behaviors, and feeding issues that require intervention.

In assessing parental worries, tools such as The Montreal Children’s Hospital Feeding Scale for children aged 6 months to 6 years and the Baby Eating Behavior Questionnaire for those under 6 months can help identify and monitor feeding issues. Observing the child eat, when possible, is also valuable.

 

Key Phases and Development

Bellaïche focused on children under 6 years, as they frequently experience feeding challenges during critical development phases, such as weaning or when the child is able to sit up.

A phase of neophilia (interest in new foods) typically occurs before 12 months, followed by a phase of neophobia (fear of new foods) between ages 1 and 3 years. This neophobia is a normal part of neuropsychological, sensory, and taste development and can persist if a key developmental moment is marked by a choking incident, mealtime stress, or forced feeding. “Challenges differ between a difficult 3-year-old and a 6- or 7-year-old who still refuses new foods,” he explained.

 

Parental Pressure and Nutritional Balance

Nutritional balance is essential, but “parental pressure is often too high.” Parents worry because they see food as a “nutraceutical.” Bellaïche recommended defusing anxiety by keeping mealtimes calm, allowing the child to eat at their pace, avoiding force-feeding, keeping meals brief, and avoiding snacks. While “it’s important to stay vigilant — as it’s incorrect to assume a child won’t let themselves starve — most cases can be managed in general practice through parental guidance, empathy, and a positive approach.”

Monitoring growth and weight curves is crucial, with the Kanawati index (ratio of arm circumference to head circumference) being a reliable indicator for specialist referral if < 0.31. A varied diet is important for nutritional balance; when this isn’t achieved, continued consumption of toddler formula after age 3 can prevent iron and calcium deficiencies.

When eating difficulties are documented, healthcare providers should investigate for underlying organic, digestive, or extra-digestive diseases (neurologic, cardiac, renal, etc.). “It’s best not to hastily diagnose cow’s milk protein allergy,” Bellaïche advised, as cases are relatively rare and unnecessarily eliminating milk can complicate a child’s relationship with food. Similarly, gastroesophageal reflux disease should be objectively diagnosed to avoid unnecessary proton pump inhibitor treatment and associated side effects.

For children with low birth weight, mild congenital heart disease, or suggestive dysmorphology, consider evaluating for a genetic syndrome.

 

Avoidant/Restrictive Food Intake Disorder (ARFID)

ARFID is marked by a lack of interest in food and avoidance due to sensory characteristics. Often observed in anxious children, ARFID is diagnosed in approximately 20% of children with autism spectrum disorder, where food selectivity is prevalent. This condition can hinder a child’s development and may necessitate nutritional supplementation.

Case Profiles in Eating Issues

Bellaïche outlined three typical cases among children considered “picky eaters”:

  • The small eater: Often near the lower growth curve limits, this child “grazes and doesn’t sit still.” These children are usually active and have a family history of similar eating habits. Parents should encourage psychomotor activities, discourage snacks outside of mealtimes, and consider fun family picnics on the floor, offering a mezze-style variety of foods. 
  • The child with a history of trauma: Children with trauma (from intubation, nasogastric tubes, severe vomiting, forced feeding, or choking) may develop aversions requiring behavioral intervention. 
  • The child with high sensory sensitivity: This child dislikes getting their hands dirty, avoids mouthing objects, or resists certain textures, such as grass and sand. Gradual behavioral approaches with sensory play and visually appealing new foods can be beneficial. Guided self-led food exploration (baby-led weaning) may also help, though dairy intake is often needed to prevent deficiencies during this stage. 

Finally, gastroesophageal reflux disease or constipation can contribute to appetite loss. Studies have shown that treating these issues can improve appetite in small eaters.

 

This story was translated from Univadis France using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


FROM JNMG 2024


Building an AI Army of Digital Twins to Fight Cancer


A patient has cancer. It’s decision time.

Clinicians and patients alike face the ultimate challenge when making those decisions. They have to consider the patient’s individual circumstances, available treatment options, potential side effects, relevant clinical data such as the patient’s genetic profile and cancer specifics, and more.

“That’s a lot of information to hold,” said Uzma Asghar, PhD, MRCP, a British consultant medical oncologist at The Royal Marsden Hospital and a chief scientific officer at Concr LTD.

What if there were a way to test — quickly and accurately — all the potential paths forward?

That’s the goal of digital twins. An artificial intelligence (AI)–based program uses all the known data on patients and their types of illness and creates a “twin” that can be used over and over to simulate disease progression, test treatments, and predict individual responses to therapies.

“What the [digital twin] model can do for the clinician is to hold all that information and process it really quickly, within a couple of minutes,” Asghar noted.

A digital twin is more than just a computer model or simulation because it copies a real-world person and relies on real-world data. Some digital twin programs also integrate new information as it becomes available. This technology holds promise for personalized medicine, drug discovery, developing screening strategies, and better understanding diseases.
 

How to Deliver a Twin

To create a digital twin, experts develop a computer model with data to hone its expertise in an area of medicine, such as cancer types and treatments. Then “you train the model on information it’s seen, and then introduce a patient and patient’s information,” said Asghar.

Asghar is currently working with colleagues to develop digital twins that could eventually help solve the aforementioned cancer scenario — a doctor and patient decide the best course of cancer treatment. But their applications are manifold, particularly in clinical research.

Digital twins often include a machine learning component, which would fall under the umbrella term of AI, said Asghar, but it’s not like ChatGPT or other generative AI modules many people are now familiar with.

“The difference here is the model is not there to replace the clinician or to replace clinical trials,” Asghar noted. Instead, digital twins help make decisions faster in a way that can be more affordable.
 

Digital Twins to Predict Cancer Outcomes

Asghar is currently involved in UK clinical trials enrolling patients with cancer to test the accuracy of digital twin programs.

At this point, these studies do not yet use digital twins to guide the course of treatment, which is something they hope to do eventually. For now, they are still at the validation phase — the digital twin program makes predictions about the treatments and then the researchers later evaluate how accurate the predictions turned out to be based on real information from the enrolled patients.

Their current model gives predictions for RECIST (Response Evaluation Criteria in Solid Tumors), treatment response, and survival. In addition to collecting data from ongoing clinical trials, they’ve used retrospective data, such as from the Cancer Tumor Atlas, to test the model.

“We’ve clinically validated it now in over 9000 patients,” said Asghar, who noted that they are constantly testing it on new patients. Their data include 30 chemotherapies and 23 cancer types, but they are focusing on four: triple-negative breast cancer, cancer of unknown primary, pancreatic cancer, and colorectal cancer.

“The reason for choosing those four cancer types is that they are aggressive, their response to chemotherapy isn’t as great, and the outcome for those patient populations, there’s significant room for improvement,” Asghar explained.

Currently, Asghar said, the model is around 80%-90% correct in predicting what the actual clinical outcomes turn out to be.

The final stage of their work, before it becomes widely available to clinicians, will be to integrate it into a clinical trial in which some clinicians use the model to make decisions about treatment vs some who don’t use the model. By studying patient outcomes in both groups, they will be able to determine the value of the digital twin program they created.
 

What Else Can a Twin Do? A Lot

While a model that helps clinicians make decisions about cancer treatments may be among the first digital twin programs that become widely available, there are many other kinds of digital twins in the works.

For example, a digital twin could be used as a benchmark for a patient to determine how their cancer might have progressed without treatment. If a patient’s tumor grew during treatment, it might seem like the treatment failed, but a digital twin might show that if left untreated, the tumor would have grown five times as fast, said Paul Macklin, PhD, professor in the Department of Intelligent Systems Engineering at Indiana University Bloomington.

Alternatively, if the virtual patient’s tumor is around the same size as the real patient’s tumor, “that means that treatment has lost its efficacy. It’s time to do something new,” said Macklin. And a digital twin could help with not only choosing a therapy but also choosing a dosing schedule, he noted.

The models can also be updated as new treatments come out, which could help clinicians virtually explore how they might affect a patient before having that patient switch treatments.

Digital twins could also assist in decision-making based on a patient’s priorities and real-life circumstances. “Maybe your priority is not necessarily to shrink this [tumor] at all costs ... maybe your priority is some mix of that and also quality of life,” Macklin said, referring to potential side effects. Or if someone lives 3 hours from the nearest cancer center, a digital twin could help determine whether less frequent treatments could still be effective.

And while much of the activity around digital twins in biomedical research has been focused on cancer, Asghar said the technology has the potential to be applied to other diseases as well. A digital twin for cardiovascular disease could help doctors choose the best treatment. It could also integrate new information from a smartwatch or glucose monitor to make better predictions and help doctors adjust the treatment plan.
 

Faster, More Effective Research With Twins

Because digital twin programs can quickly analyze large datasets, they can also make real-world studies more effective and efficient.

Though digital twins would not fully replace real clinical trials, they could help run through preliminary scenarios before starting a full clinical trial, which would “save everybody some money, time and pain and risk,” said Macklin.

It’s also possible to use digital twins to design better screening strategies for early cancer detection and monitoring, said Ioannis Zervantonakis, PhD, a bioengineering professor at the University of Pittsburgh.

Zervantonakis is tapping digital twin technology for research that homes in on understanding tumors. In this case, the digital twin is a virtual representation of a real tumor, complete with its complex network of cells and the surrounding tissue.

Zervantonakis’ lab is using the technology to study cell-cell interactions in the tumor microenvironment, with a focus on human epidermal growth factor receptor 2–targeted therapy resistance in breast cancer. The digital twin they developed will simulate tumor growth, predict drug response, analyze cellular interactions, and optimize treatment strategies.
 

The Long Push Forward

One big hurdle to making digital twins more widely available is that regulation for the technology is still in progress.

“We’re developing the technology, and what’s also happening is the regulatory framework is being developed in parallel. So we’re almost developing things blindly on the basis that we think this is what the regulators would want,” explained Asghar.

“It’s really important that these technologies are regulated properly, just like drugs, and that’s what we’re pushing and advocating for,” said Asghar, noting that people need to know that like drugs, a digital twin has strengths and limitations.

And while a digital twin can be a cost-saving approach in the long run, it does require funding to get a program built, and finding funds can be difficult because not everyone knows about the technology. More funding means more trials.

With more data, Asghar is hopeful that within a few years, a digital twin model could be available for clinicians to use to help inform treatment decisions. This could lead to more effective treatments and, ultimately, better patient outcomes.
 

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

A patient has cancer. It’s decision time.

Clinician and patient alike face, really, the ultimate challenge when making those decisions. They have to consider the patient’s individual circumstances, available treatment options, potential side effects, relevant clinical data such as the patient’s genetic profile and cancer specifics, and more.

“That’s a lot of information to hold,” said Uzma Asghar, PhD, MRCP, a British consultant medical oncologist at The Royal Marsden Hospital and a chief scientific officer at Concr LTD.

What if there were a way to test — quickly and accurately — all the potential paths forward?

That’s the goal of digital twins. An artificial intelligence (AI)–based program uses all the known data on patients and their types of illness and creates a “twin” that can be used over and over to simulate disease progression, test treatments, and predict individual responses to therapies.

“What the [digital twin] model can do for the clinician is to hold all that information and process it really quickly, within a couple of minutes,” Asghar noted.

A digital twin is more than just a computer model or simulation because it copies a real-world person and relies on real-world data. Some digital twin programs also integrate new information as it becomes available. This technology holds promise for personalized medicine, drug discovery, developing screening strategies, and better understanding diseases.
 

How to Deliver a Twin

To create a digital twin, experts develop a computer model with data to hone its expertise in an area of medicine, such as cancer types and treatments. Then “you train the model on information it’s seen, and then introduce a patient and patient’s information,” said Asghar.

Asghar is currently working with colleagues to develop digital twins that could eventually help solve the aforementioned cancer scenario — a doctor and patient decide the best course of cancer treatment. But their applications are manifold, particularly in clinical research.

Digital twins often include a machine learning component, which would fall under the umbrella term of AI, said Asghar, but it’s not like ChatGPT or other generative AI modules many people are now familiar with.

“The difference here is the model is not there to replace the clinician or to replace clinical trials,” Asghar noted. Instead, digital twins help make decisions faster in a way that can be more affordable.
 

Digital Twins to Predict Cancer Outcomes

Asghar is currently involved in UK clinical trials enrolling patients with cancer to test the accuracy of digital twin programs.

At this point, these studies do not yet use digital twins to guide the course of treatment, which is something they hope to do eventually. For now, they are still at the validation phase — the digital twin program makes predictions about the treatments and then the researchers later evaluate how accurate the predictions turned out to be based on real information from the enrolled patients.

Their current model gives predictions for RECIST (response evaluation criteria in solid tumor), treatment response, and survival. In addition to collecting data from ongoing clinical trials, they’ve used retrospective data, such as from the Cancer Tumor Atlas, to test the model.

“We’ve clinically validated it now in over 9000 patients,” said Asghar, who noted that they are constantly testing it on new patients. Their data include 30 chemotherapies and 23 cancer types, but they are focusing on four: Triple-negative breast cancer, cancer of unknown primary, pancreatic cancer, and colorectal cancer.

“The reason for choosing those four cancer types is that they are aggressive, their response to chemotherapy isn’t as great, and the outcome for those patient populations, there’s significant room for improvement,” Asghar explained.

Currently, Asghar said, the model is around 80%-90% correct in predicting what the actual clinical outcomes turn out to be.

The final stage of their work, before it becomes widely available to clinicians, will be to integrate it into a clinical trial in which some clinicians use the model to make decisions about treatment vs some who don’t use the model. By studying patient outcomes in both groups, they will be able to determine the value of the digital twin program they created.
 

 

 

What Else Can a Twin Do? A Lot

While a model that helps clinicians make decisions about cancer treatments may be among the first digital twin programs that become widely available, there are many other kinds of digital twins in the works.

For example, a digital twin could be used as a benchmark for a patient to determine how their cancer might have progressed without treatment. Say a patient’s tumor grew during treatment, it might seem like the treatment failed, but a digital twin might show that if left untreated, the tumor would have grown five times as fast, said Paul Macklin, PhD, professor in the Department of Intelligent Systems Engineering at Indiana University Bloomington.

A patient has cancer. It’s decision time.

Clinician and patient alike face a daunting challenge in making those decisions. They have to weigh the patient's individual circumstances, available treatment options, potential side effects, relevant clinical data such as the patient's genetic profile and cancer specifics, and more.

“That’s a lot of information to hold,” said Uzma Asghar, PhD, MRCP, a British consultant medical oncologist at The Royal Marsden Hospital and a chief scientific officer at Concr LTD.

What if there were a way to test — quickly and accurately — all the potential paths forward?

That’s the goal of digital twins. An artificial intelligence (AI)–based program uses all the known data on patients and their types of illness and creates a “twin” that can be used over and over to simulate disease progression, test treatments, and predict individual responses to therapies.

“What the [digital twin] model can do for the clinician is to hold all that information and process it really quickly, within a couple of minutes,” Asghar noted.

A digital twin is more than just a computer model or simulation because it copies a real-world person and relies on real-world data. Some digital twin programs also integrate new information as it becomes available. This technology holds promise for personalized medicine, drug discovery, developing screening strategies, and better understanding diseases.
 

How to Deliver a Twin

To create a digital twin, experts develop a computer model with data to hone its expertise in an area of medicine, such as cancer types and treatments. Then “you train the model on information it’s seen, and then introduce a patient and patient’s information,” said Asghar.

Asghar is currently working with colleagues to develop digital twins that could eventually help solve the aforementioned cancer scenario — a doctor and patient decide the best course of cancer treatment. But their applications are manifold, particularly in clinical research.

Digital twins often include a machine learning component, which would fall under the umbrella term of AI, said Asghar, but it’s not like ChatGPT or other generative AI modules many people are now familiar with.

“The difference here is the model is not there to replace the clinician or to replace clinical trials,” Asghar noted. Instead, digital twins help make decisions faster in a way that can be more affordable.
 

Digital Twins to Predict Cancer Outcomes

Asghar is currently involved in UK clinical trials enrolling patients with cancer to test the accuracy of digital twin programs.

At this point, these studies do not yet use digital twins to guide the course of treatment, which is something they hope to do eventually. For now, they are still at the validation phase — the digital twin program makes predictions about the treatments and then the researchers later evaluate how accurate the predictions turned out to be based on real information from the enrolled patients.

Their current model gives predictions for RECIST (response evaluation criteria in solid tumor), treatment response, and survival. In addition to collecting data from ongoing clinical trials, they’ve used retrospective data, such as from the Cancer Tumor Atlas, to test the model.

“We’ve clinically validated it now in over 9000 patients,” said Asghar, who noted that they are constantly testing it on new patients. Their data include 30 chemotherapies and 23 cancer types, but they are focusing on four: triple-negative breast cancer, cancer of unknown primary, pancreatic cancer, and colorectal cancer.

“The reason for choosing those four cancer types is that they are aggressive, their response to chemotherapy isn’t as great, and the outcome for those patient populations, there’s significant room for improvement,” Asghar explained.

Currently, Asghar said, the model is around 80%-90% correct in predicting what the actual clinical outcomes turn out to be.

The final stage of their work, before it becomes widely available to clinicians, will be to integrate it into a clinical trial in which some clinicians use the model to make decisions about treatment vs some who don’t use the model. By studying patient outcomes in both groups, they will be able to determine the value of the digital twin program they created.

What Else Can a Twin Do? A Lot

While a model that helps clinicians make decisions about cancer treatments may be among the first digital twin programs that become widely available, there are many other kinds of digital twins in the works.

For example, a digital twin could be used as a benchmark for a patient to determine how their cancer might have progressed without treatment. Say a patient’s tumor grew during treatment. It might seem as if the treatment failed, but a digital twin might show that if left untreated, the tumor would have grown five times as fast, said Paul Macklin, PhD, professor in the Department of Intelligent Systems Engineering at Indiana University Bloomington.

Alternatively, if the virtual patient’s tumor is around the same size as the real patient’s tumor, “that means that treatment has lost its efficacy. It’s time to do something new,” said Macklin. And a digital twin could help with not only choosing a therapy but also choosing a dosing schedule, he noted.
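
The benchmark idea can be sketched with a toy calculation. This uses simple exponential growth with made-up rates, purely to illustrate the comparison Macklin describes — it is not the model his group actually uses:

```python
import math

# Toy illustration only: hypothetical growth rates, arbitrary units.
V0 = 1.0            # starting tumor volume
r_untreated = 0.05  # assumed per-day growth rate without treatment
r_treated = 0.01    # assumed per-day growth rate on treatment
t = 40              # days of follow-up

untreated = V0 * math.exp(r_untreated * t)  # the "benchmark" twin
treated = V0 * math.exp(r_treated * t)      # the real patient's course

# The tumor still grew on treatment...
print(f"treated volume: {treated:.2f}")
# ...but the untreated benchmark would be roughly 5x larger,
# so the treatment is still doing substantial work.
print(f"untreated/treated ratio: {untreated / treated:.2f}")
```

If instead the treated trajectory tracked the benchmark closely (ratio near 1), that would signal, as Macklin puts it, that the treatment has lost its efficacy.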

The models can also be updated as new treatments come out, which could help clinicians virtually explore how they might affect a patient before having that patient switch treatments.

Digital twins could also assist in decision-making based on a patient’s priorities and real-life circumstances. “Maybe your priority is not necessarily to shrink this [tumor] at all costs ... maybe your priority is some mix of that and also quality of life,” Macklin said, referring to potential side effects. Or if someone lives 3 hours from the nearest cancer center, a digital twin could help determine whether less frequent treatments could still be effective.

And while much of the activity around digital twins in biomedical research has been focused on cancer, Asghar said the technology has the potential to be applied to other diseases as well. A digital twin for cardiovascular disease could help doctors choose the best treatment. It could also integrate new information from a smartwatch or glucose monitor to make better predictions and help doctors adjust the treatment plan.
 

Faster, More Effective Research With Twins

Because digital twin programs can quickly analyze large datasets, they can also make real-world studies more effective and efficient.

Though digital twins would not fully replace real clinical trials, they could help run through preliminary scenarios before starting a full clinical trial, which would “save everybody some money, time and pain and risk,” said Macklin.

It’s also possible to use digital twins to design better screening strategies for early cancer detection and monitoring, said Ioannis Zervantonakis, PhD, a bioengineering professor at the University of Pittsburgh.

Zervantonakis is tapping digital twin technology for research that homes in on understanding tumors. In this case, the digital twin is a virtual representation of a real tumor, complete with its complex network of cells and the surrounding tissue.

Zervantonakis’ lab is using the technology to study cell-cell interactions in the tumor microenvironment, with a focus on human epidermal growth factor receptor 2–targeted therapy resistance in breast cancer. The digital twin they developed will simulate tumor growth, predict drug response, analyze cellular interactions, and optimize treatment strategies.

The Long Push Forward

One big hurdle to making digital twins more widely available is that regulation for the technology is still in progress.

“We’re developing the technology, and what’s also happening is the regulatory framework is being developed in parallel. So we’re almost developing things blindly on the basis that we think this is what the regulators would want,” explained Asghar.

“It’s really important that these technologies are regulated properly, just like drugs, and that’s what we’re pushing and advocating for,” said Asghar, noting that people need to know that like drugs, a digital twin has strengths and limitations.

And while a digital twin can be a cost-saving approach in the long run, it does require funding to get a program built, and finding funds can be difficult because not everyone knows about the technology. More funding means more trials.

With more data, Asghar is hopeful that within a few years, a digital twin model could be available for clinicians to use to help inform treatment decisions. This could lead to more effective treatments and, ultimately, better patient outcomes.
 

A version of this article appeared on Medscape.com.


Smokeless Tobacco, Areca Nut Chewing Behind 1 in 3 Oral Cancers: IARC Report

Article Type
Changed
Wed, 11/13/2024 - 09:38

Globally, nearly one in three cases of oral cancer can be attributed to use of smokeless tobacco and areca nut products, according to a new study from the International Agency for Research on Cancer (IARC), a part of the World Health Organization (WHO).

“Smokeless tobacco and areca nut products are available to consumers in many different forms across the world, but consuming smokeless tobacco and areca nut is linked to multiple diseases, including oral cancer,” Harriet Rumgay, PhD, a scientist in the Cancer Surveillance Branch at IARC and first author of the study in Lancet Oncology, said in a news release.

Worldwide, about 300 million people use smokeless tobacco and 600 million people use areca (also called betel) nut, one of the most popular psychoactive substances in the world after nicotine, alcohol, and caffeine. Smokeless tobacco products are consumed without burning and can be chewed, sucked, inhaled, applied locally, or ingested. Areca nut is the seed of the areca palm and can be consumed in various forms.

“Our estimates highlight the burden these products pose on health care and the importance of prevention strategies to reduce consumption of smokeless tobacco and areca nut,” Rumgay said.

According to the new report, in 2022, an estimated 120,200 of the 389,800 global cases of oral cancer (30.8%) were attributable to these products.

More than three quarters (77%) of attributable cases were among men and about one quarter (23%) among women.

The vast majority (96%) of all oral cancer cases caused by smokeless tobacco and areca nut use occurred in low- and middle-income countries.

Regions with the highest burden of oral cancers from these products were Southcentral Asia — with 105,500 of 120,200 cases (nearly 88%), including 83,400 in India, 9700 in Bangladesh, 8900 in Pakistan, and 1300 in Sri Lanka — followed by Southeastern Asia with a total of 3900 cases (1600 in Myanmar, 990 in Indonesia, and 785 in Thailand) and East Asia with 3300 cases (3200 in China).
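
The headline fractions follow directly from the reported case counts; a quick consistency check, using only figures quoted above:

```python
# Numbers taken from the IARC report as quoted in the article.
attributable = 120_200   # oral cancers attributable to the products, 2022
total = 389_800          # all oral cancer cases globally, 2022

share = attributable / total * 100
print(f"attributable share: {share:.1f}%")  # matches the reported 30.8%

# Southcentral Asia's 105,500 cases as a share of all attributable cases:
southcentral = 105_500
print(f"Southcentral Asia: {southcentral / attributable * 100:.1f}%")  # "nearly 88%"
```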
 

Limitations and Action Points

The authors noted a limitation of the analysis is not accounting for the potential synergistic effects of combined use of smokeless tobacco or areca nut products with other risk factors for oral cancer, such as smoking tobacco or drinking alcohol.

The researchers explained that combined consumption of smokeless tobacco or areca nut, smoked tobacco, and alcohol has a “multiplicative effect” on oral cancer risk, with reported odds ratios increasing from 2.7 for smokeless tobacco only, 7.0 for smoked tobacco only, and 1.6 for alcohol only to 16.2 for all three exposures (vs no use).
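
As a back-of-envelope check on the "multiplicative" framing: if the three single-exposure odds ratios combined fully independently, the naive product would be about 30.2. The reported joint OR of 16.2 sits below that naive product but far above any single exposure:

```python
# Single-exposure odds ratios as reported in the article.
or_smokeless, or_smoked, or_alcohol = 2.7, 7.0, 1.6

# What a fully independent (purely multiplicative) combination would predict.
naive_product = or_smokeless * or_smoked * or_alcohol
print(f"naive independent product: {naive_product:.1f}")

# The reported joint OR (all three exposures vs no use).
reported_joint = 16.2
# Lower than the naive product, but still well above every single exposure,
# consistent with a multiplicative-type interaction.
print(reported_joint > max(or_smokeless, or_smoked, or_alcohol))
```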

However, the proportion of people who chewed tobacco and also smoked in countries with high smokeless tobacco or areca nut use was small. In India, for example, 6% of men and 0.5% of women in 2016-2017 were dual users of both smoked and smokeless tobacco, compared with 23% of men and 12% of women who only used smokeless tobacco.

Overall, curbing or preventing smokeless tobacco and areca nut use could help avoid many instances of oral cancer.

Despite “encouraging trends” in control of tobacco smoking in many regions of the world over the past two decades, progress in reducing the prevalence of smokeless tobacco consumption has stalled in many countries that are major consumers, the authors said.

Compounding the problem, areca nut does not fall within the WHO framework of tobacco control and there are very few areca nut control policies worldwide.

Smokeless tobacco control must be “prioritized” and a framework on areca nut control should be developed with guidelines to incorporate areca nut prevention into cancer control programs, the authors concluded.

Funding for the study was provided by the French National Cancer Institute. The authors had no relevant disclosures.

A version of this article first appeared on Medscape.com.

Article Source

FROM THE LANCET ONCOLOGY


Study Finds No Significant Effect of Low-Dose Oral Minoxidil on BP

Article Type
Changed
Wed, 11/13/2024 - 09:27

 

TOPLINE:

Low-dose oral minoxidil (LDOM), used off-label to treat alopecia, does not significantly affect blood pressure (BP) in patients with alopecia, but is associated with a slight increase in heart rate and a 5% incidence of hypotensive symptoms.

METHODOLOGY:

  • Researchers conducted a systematic review and meta-analysis of 16 studies, which involved 2387 patients with alopecia (60.7% women) who received minoxidil, a vasodilator originally developed as an antihypertensive, at doses of 5 mg or less per day.
  • Outcomes included changes in mean arterial pressure, systolic BP, diastolic BP, and heart rate.
  • Mean differences were calculated between pretreatment and posttreatment values.

TAKEAWAY:

  • Hypotensive symptoms were reported in 5% of patients, with no significant hypotensive episodes. About 1.8% of patients experienced lightheadedness or syncope, 1.2% experienced dizziness, 0.9% had tachycardia, and 0.8% had palpitations.
  • LDOM did not significantly alter systolic BP (mean difference, –0.13; 95% CI, –2.67 to 2.41), diastolic BP (mean difference, –1.25; 95% CI, –3.21 to 0.71), or mean arterial pressure (mean difference, –1.92; 95% CI, –4.00 to 0.17).
  • LDOM led to a significant increase in heart rate (mean difference, 2.67 beats/min; 95% CI, 0.34-5.01), a difference the authors wrote would “likely not be clinically significant for most patients.”
  • Hypertrichosis was the most common side effect (59.6%) and reason for stopping treatment (accounting for nearly 35% of discontinuations).
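
The significance calls in these bullets follow from whether each 95% CI for the mean difference contains 0. A minimal check over the reported intervals:

```python
def excludes_zero(lo: float, hi: float) -> bool:
    """A mean difference is statistically significant (in this simple
    sense) when its 95% CI does not contain 0."""
    return lo > 0 or hi < 0

# 95% CIs for the mean differences, as reported above.
outcomes = {
    "systolic BP": (-2.67, 2.41),
    "diastolic BP": (-3.21, 0.71),
    "mean arterial pressure": (-4.00, 0.17),
    "heart rate": (0.34, 5.01),
}

for name, (lo, hi) in outcomes.items():
    verdict = "significant" if excludes_zero(lo, hi) else "not significant"
    print(f"{name}: {verdict}")
# Only heart rate reaches significance, matching the reported takeaway.
```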

IN PRACTICE:

“LDOM appears to be a safe treatment for alopecia with no significant impact on blood pressure,” the authors wrote, noting that the study “addresses gaps in clinical knowledge involving LDOM.” Based on their results, they recommended that BP and heart rate “do not need to be closely monitored in patients without prior cardiovascular risk history.”

SOURCE:

The study was led by Matthew Chen, BS, of Stony Brook Dermatology in New York. It was published online in the Journal of the American Academy of Dermatology.

LIMITATIONS:

The studies included had small sample sizes and retrospective designs, which may limit the reliability of the findings. Additional limitations include the absence of control groups, a potential recall bias in adverse effect reporting, and variability in dosing regimens and BP monitoring. 

DISCLOSURES:

The authors reported no external funding or conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Many Patients With Cancer Visit EDs Before Diagnosis

Article Type
Changed
Mon, 11/11/2024 - 12:38

More than one third of patients with cancer visited an emergency department (ED) in the 90 days before their diagnosis, according to a study of medical records from Ontario, Canada.

Researchers examined Institute for Clinical Evaluative Sciences (ICES) data that had been gathered from January 1, 2014, to December 31, 2021. The study focused on patients aged 18 years or older with confirmed primary cancer diagnoses.

Factors associated with an increased likelihood of an ED visit ahead of diagnosis included having certain cancers, living in rural areas, and having less access to primary care, according to study author Keerat Grewal, MD, an emergency physician and clinician scientist at the Schwartz/Reisman Emergency Medicine Institute at Sinai Health in Toronto, Ontario, Canada, and coauthors.

“The ED is a distressing environment for patients to receive a possible cancer diagnosis,” the authors wrote. “Moreover, it is frequently ill equipped to provide ongoing continuity of care, which can lead patients down a poorly defined diagnostic pathway before receiving a confirmed diagnosis based on tissue and a subsequent treatment plan.”

The findings were published online on November 4 in CMAJ.
 

Neurologic Cancers Prominent

In an interview, Grewal said that the study reflects her desire as an emergency physician to understand why so many patients with cancer get the initial reports about their disease from clinicians they have often just met.

Among patients with an ED visit before cancer diagnosis, 51.4% were admitted to hospital from the most recent visit.

Compared with patients with a family physician on whom they could rely for routine care, those who had no outpatient visits (odds ratio [OR], 2.09) or fewer than three outpatient visits (OR, 1.41) in the 6-30 months before cancer diagnosis were more likely to have an ED visit before their cancer diagnosis.

Other factors associated with increased odds of ED use before cancer diagnosis included rurality (OR, 1.15), residence in northern Ontario (northeast region: OR, 1.14 and northwest region: OR, 1.27 vs Toronto region), and living in the most marginalized areas (material resource deprivation: OR, 1.37 and housing stability: OR, 1.09 vs least marginalized area).

The researchers also found that patients with certain cancers were more likely to have sought care in the ED. They compared these cancers with breast cancer, which is often detected through screening.

“Patients with neurologic cancers had extremely high odds of ED use before cancer diagnosis,” the authors wrote. “This is likely because of the emergent nature of presentation, with acute neurologic symptoms such as weakness, confusion, or seizures, which require urgent assessment.” On the other hand, pancreatic, liver, or thoracic cancer can trigger nonspecific symptoms that may be ignored until they reach a crisis level that prompts an ED visit.

According to the researchers, the study's limitations included its inability to identify cancer-related ED visits and its focus on patients in Ontario. However, the ICES databases gave the researchers access to a broader pool of data than is available in many other settings.

The findings in the new paper echo those of previous research, the authors noted. Research in the United Kingdom found that 24%-31% of cancer diagnoses involved the ED. In addition, a study of people enrolled in the US Medicare program, which serves patients aged 65 years or older, found that 23% were seen in the ED in the 30 days before diagnosis.
 

 

 

‘Unpacking the Data’

The current findings also are consistent with those of an International Cancer Benchmarking Partnership study that was published in 2022 in The Lancet Oncology, said Erika Nicholson, MHS, vice president of cancer systems and innovation at the Canadian Partnership Against Cancer. The latter study analyzed cancer registration and linked hospital admissions data from 14 jurisdictions in Australia, Canada, Denmark, New Zealand, Norway, and the United Kingdom.

“We see similar trends in terms of people visiting EDs and being diagnosed through EDs internationally,” Nicholson said. “We’re working with partners to put in place different strategies to address the challenges” that this phenomenon presents in terms of improving screening and follow-up care.

“Cancer is not one disease, but many diseases,” she said. “They present differently. We’re focused on really unpacking the data and understanding them.”

All this research highlights the need for more services and personnel to address cancer, including people who are trained to help patients cope after getting concerning news through emergency care, she said.

“That means having a system that fully supports you and helps you navigate through that diagnostic process,” Nicholson said. Addressing the added challenges for patients who don’t have secure housing is a special need, she added.

This study was supported by the Canadian Institutes of Health Research (CIHR). Grewal reported receiving grants from CIHR and the Canadian Association of Emergency Physicians. Nicholson reported no relevant financial relationships.

A version of this article appeared on Medscape.com.



Plasma Omega-6 and Omega-3 Fatty Acids Inversely Associated With Cancer

Article Type
Changed
Wed, 11/13/2024 - 03:09

 

TOPLINE:

Higher plasma levels of omega-6 and omega-3 fatty acids are associated with a lower incidence of cancer. However, omega-3 fatty acids are linked to an increased risk for prostate cancer, specifically.

METHODOLOGY:

  • Researchers looked for associations of plasma omega-3 and omega-6 polyunsaturated fatty acids (PUFAs) with the incidence of cancer overall and 19 site-specific cancers in the large population-based prospective UK Biobank cohort.
  • They included 253,138 participants aged 37-73 years who were followed for an average of 12.9 years, with 29,838 diagnosed with cancer.
  • Plasma levels of omega-3 and omega-6 fatty acids were measured using nuclear magnetic resonance and expressed as percentages of total fatty acids.
  • Participants with cancer diagnoses at baseline, those who withdrew from the study, and those with missing data on plasma PUFAs were excluded.
  • The study adjusted for multiple covariates, including age, sex, ethnicity, socioeconomic status, lifestyle behaviors, and family history of diseases.

TAKEAWAY:

  • Higher plasma levels of omega-6 and omega-3 fatty acids were associated with a 2% and 1% reduction in overall cancer risk per SD increase, respectively (P = .001 and P = .03).
  • Omega-6 fatty acids were inversely associated with 14 site-specific cancers, whereas omega-3 fatty acids were inversely associated with five site-specific cancers.
  • Prostate cancer was positively associated with omega-3 fatty acids, with a 3% increased risk per SD increase (P = .049).
  • A higher omega-6/omega-3 ratio was associated with an increased risk for overall cancer, and three site-specific cancers showed positive associations with the ratio. “Each standard deviation increase, corresponding to a 13.13 increase in the omega ratio, was associated with a 2% increase in the risk of rectum cancer,” for example, the authors wrote.

IN PRACTICE:

“Overall, our findings provide support for possible small net protective roles of omega-3 and omega-6 PUFAs in the development of new cancer incidence. Our study also suggests that the usage of circulating blood biomarkers captures different aspects of dietary intake, reduces measurement errors, and thus enhances statistical power. The differential effects of omega-6% and omega-3% in age and sex subgroups warrant future investigation,” wrote the authors of the study.

SOURCE:

The study was led by Yuchen Zhang of the University of Georgia in Athens, Georgia. It was published online in the International Journal of Cancer.

LIMITATIONS:

The study’s potential for selective bias persists due to the participant sample skewing heavily toward European ancestry and White ethnicity. The number of events was small for some specific cancer sites, which may have limited the statistical power. The study focused on total omega-3 and omega-6 PUFAs, with only two individual fatty acids measured. Future studies are needed to examine the roles of other individual PUFAs and specific genetic variants. 

DISCLOSURES:

This study was supported by grants from the National Institute of General Medical Sciences of the National Institutes of Health. No relevant conflicts of interest were disclosed by the authors.
 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


 



Pinto Bean Pressure Wraps: A Novel Approach to Treating Digital Warts

Article Type
Changed
Thu, 11/07/2024 - 16:57

Practice Gap

Verruca vulgaris is a common dermatologic challenge due to its high prevalence and tendency to recur following routinely employed destructive modalities (eg, cryotherapy, electrosurgery), which can cause considerable pain and some risk for scarring.1,2 Other treatments, such as topical salicylic acid preparations, topical immunotherapy, and intralesional allergen injections, often require multiple treatment sessions.3,4 Furthermore, the financial burden of traditional wart treatment can be substantial.4 Better techniques are needed to improve the clinician’s approach to treating warts. We describe a home-based technique for treating common digital warts that uses pinto bean pressure wraps to induce ischemic changes in wart tissue, with response rates similar to those of commonly used modalities.

Technique

Our technique utilizes a small, hard, convex object that is applied directly over the digital wart. A simple self-adhesive wrap is used to cover the object and maintain constant pressure on the wart overnight. We typically use a dried pinto bean (a variety of the common bean Phaseolus vulgaris) acquired from a local grocery store due to its ideal size, hard surface, and convex shape (Figure 1). The bean is taped in place directly overlying the wart and covered with a self-adhesive wrap overnight. The wrap is removed in the morning, and often no further treatment is needed. The ischemic wart tissue is allowed to slough spontaneously over 1 to 2 weeks. No wound care or dressing is necessary (Figure 2). Larger warts may require application of the pressure wraps for 2 to 3 additional nights. While most warts resolve with this technique, we have observed a recurrence rate similar to that for cryotherapy. Patients are advised that any recurrent warts can be re-treated monthly, if needed, until resolution.

FIGURE 1. A, The home pressure wrap kit includes pinto beans, stretch tape, and a self-adherent wrap. B, A pinto bean is taped in place directly over the wart. C, The self-adherent wrap is applied to augment the pressure of the secured bean.

FIGURE 2. A–C, The digital wart before treatment, 2 days after a single overnight pressure wrap application showing necrosis of the wart, and 6 days posttreatment showing evidence of sloughing.

What to Use and How to Prepare—Any small, hard, convex object can be used for the pressure wrap; we also have used appropriately sized and shaped plastic shirt buttons with similar results. Home kits can be assembled in advance and provided to patients at their initial visit along with appropriate instructions (Figure 1A).

Effects on the Skin and Distal Digit—Application of pressure wraps does not harm normal skin; however, care should be taken when the self-adherent wrap is applied so as not to induce ischemia of the distal digit. The wrap should be applied using gentle pressure with patients experiencing minimal discomfort from the overnight application.

Indications—This pressure wrap technique can be employed on most digital warts, including periungual warts, which can be difficult to treat by other means. However, in our experience this technique is not effective for nondigital warts, likely due to the inability to maintain adequate pressure with the overlying dressing. Patients at risk for compromised digital perfusion, such as those with Raynaud phenomenon or systemic sclerosis, should not be treated with pressure wraps due to possible digital ischemia.

Precautions—Patients should be advised that the pinto bean should be used only if dry and should not be ingested. The bean can be a choking hazard for small children; therefore, appropriate precautions should be taken. Allergic contact dermatitis to the materials used in this technique is possible, but we have not observed it. The pinto bean can be reused for future applications as long as it remains dry and provides a hard convex surface.

Practice Implications

The mechanism of the ischemic change likely is occlusion of the tortuous blood vessels in the dermal papillae, which are intrinsic to wart tissue and absent in normal skin.1 This pressure-induced ischemic injury allows for selective destruction of the wart tissue with sparing of the normal skin. Our technique is fairly novel, although at least one report in the literature has described the use of a mechanical device to induce ischemic changes in skin tags.5

The use of pinto bean pressure wraps to induce ischemic change in digital warts provides a low-risk and nearly pain-free alternative to more expensive and invasive treatment methods. Moreover, this technique allows for a low-cost home-based therapy that can be repeated easily for other digital sites or if recurrence is noted.

References
  1. Cardoso J, Calonje E. Cutaneous manifestations of human papillomaviruses: a review. Acta Dermatovenerol Alp Pannonica Adriat. 2011;20:145-154. 
  2. Lipke M. An armamentarium of wart treatments. Clin Med Res. 2006;4:273-293. doi:10.3121/cmr.4.4.273 
  3. Muse M, Stiff K, Glines K, et al. A review of intralesional wart therapy. Dermatol Online J. 2020;26:2. doi:10.5070/D3263048027
  4. Berna R, Margolis D, Barbieri J. Annual health care utilization and costs for treatment of cutaneous and anogenital warts among a commercially insured population in the US, 2017-2019. JAMA Dermatol. 2022;158:695-697. doi:10.1001/jamadermatol.2022.0964
  5. Fredriksson C, Ilias M, Anderson C. New mechanical device for effective removal of skin tags in routine health care. Dermatol Online J. 2009;15:9. doi:10.5070/D37tj2800k
Author and Disclosure Information

From Forefront Dermatology, West Burlington, Iowa.

The authors have no relevant financial disclosures to report.

Correspondence: Mark G. Cleveland, MD, PhD, 1225 S Gear Ave, Ste 252, West Burlington, IA 52655 ([email protected]).

Cutis. 2024 November;114(5):169-170. doi:10.12788/cutis.1121

Practice Gap

Verruca vulgaris is a common dermatologic challenge due to its high prevalence and tendency to recur following routinely employed destructive modalities (eg, cryotherapy, electrosurgery), which can cause considerable pain and carry some risk for scarring.1,2 Other treatment methods for warts, such as topical salicylic acid preparations, topical immunotherapy, or intralesional allergen injections, often require multiple treatment sessions.3,4 Furthermore, the financial burden of traditional wart treatment can be substantial.4 Better techniques are needed to improve the clinician’s approach to treating warts. We describe a home-based technique for treating common digital warts using pinto bean pressure wraps to induce ischemic changes in wart tissue, with response rates similar to those of commonly used modalities.

Technique

Our technique utilizes a small, hard, convex object that is applied directly over the digital wart. A simple self-adhesive wrap is used to cover the object and maintain constant pressure on the wart overnight. We typically use a dried pinto bean (a variety of the common bean Phaseolus vulgaris) acquired from a local grocery store due to its ideal size, hard surface, and convex shape (Figure 1). The bean is taped in place directly overlying the wart and covered with a self-adhesive wrap overnight. The wrap is removed in the morning, and often no further treatment is needed. The ischemic wart tissue is allowed to slough spontaneously over 1 to 2 weeks. No wound care or dressing is necessary (Figure 2). Larger warts may require application of the pressure wraps for 2 to 3 additional nights. While most warts resolve with this technique, we have observed a recurrence rate similar to that for cryotherapy. Patients are advised that any recurrent warts can be re-treated monthly, if needed, until resolution.

FIGURE 1. A, The home pressure wrap kit includes pinto beans, stretch tape, and a self-adherent wrap. B, A pinto bean is taped in place directly over the wart. C, The self-adherent wrap is applied to augment the pressure of the secured bean.

FIGURE 2. A–C, The digital wart before treatment, 2 days after a single overnight pressure wrap application showing necrosis of the wart, and 6 days posttreatment showing evidence of sloughing.

What to Use and How to Prepare—Any small, hard, convex object can be used for the pressure wrap; we also have used appropriately sized and shaped plastic shirt buttons with similar results. Home kits can be assembled in advance and provided to patients at their initial visit along with appropriate instructions (Figure 1A).

Effects on the Skin and Distal Digit—Application of pressure wraps does not harm normal skin; however, care should be taken when applying the self-adherent wrap so as not to induce ischemia of the distal digit. The wrap should be applied with gentle pressure; patients typically experience minimal discomfort from the overnight application.

Display Headline
Pinto Bean Pressure Wraps: A Novel Approach to Treating Digital Warts

Hospital Dermatology: Review of Research in 2023-2024

Article Type
Changed
Thu, 11/07/2024 - 16:46

Inpatient consultative dermatology has advanced as a subspecialty and increasingly gained recognition in recent years. Since its founding in 2009, the Society of Dermatology Hospitalists has fostered research and education in hospital dermatology. Last year, we reviewed the 2022-2023 literature with a focus on developments in severe cutaneous adverse reactions, supportive oncodermatology, cost of inpatient services, and teledermatology.1 In this review, we highlight 3 areas of interest from the 2023-2024 literature: severe cutaneous adverse drug reactions, skin and soft tissue infections, and autoimmune blistering diseases (AIBDs).

Severe Cutaneous Adverse Drug Reactions

Adverse drug reactions are among the most common diagnoses encountered by inpatient dermatology consultants.2,3 Severe cutaneous adverse drug reactions are associated with substantial morbidity and mortality. Efforts to characterize these conditions and standardize their diagnosis and management continue to be a major focus of ongoing research.

A single-center retrospective analysis of 102 cases of drug reaction with eosinophilia and systemic symptoms (DRESS) syndrome evaluated differences in clinical manifestations depending on the culprit drug, offering insights into the heterogeneity of DRESS syndrome and the potential for diagnostic uncertainty.4 The shortest median latencies were observed in cases caused by penicillins and cephalosporins (12 and 18 days, respectively), while DRESS syndrome secondary to allopurinol had the longest median latency (36 days). Nonsteroidal anti-inflammatory drug–induced DRESS syndrome was associated with the shortest hospital stay (6.5 days), while cephalosporin and vancomycin cases had the highest mortality rates.4

In the first international Delphi consensus study on the diagnostic workup, severity assessment, and management of DRESS syndrome, 54 dermatology and/or allergy experts reached consensus on 93 statements.5 Specific recommendations included basic evaluation with complete blood count with differential, kidney and liver function parameters, and electrocardiogram for all patients with suspected DRESS syndrome, with additional complementary workup considered in patients with evidence of specific organ damage and/or severe disease. In the proposed DRESS syndrome severity grading scheme, laboratory values that reached consensus for inclusion were hemoglobin, neutrophil, and platelet counts and creatinine, transaminases, and alkaline phosphatase levels. Although treatment of DRESS syndrome should be based on assessed disease severity, treatment with corticosteroids should be initiated in all patients with confirmed DRESS syndrome. Cyclosporine, antibodies interfering with the IL-5 axis, and intravenous immunoglobulins can be considered in patients with corticosteroid-refractory DRESS syndrome, and antiviral treatment can be considered in patients with a high serum cytomegalovirus viral load. Regularly following up with laboratory evaluation of involved organs; screening for autoantibodies, thyroid dysfunction, and steroid adverse effects; and offering of psychological support also were consensus recommendations.5

Identifying causative agents in drug hypersensitivity reactions remains challenging. A retrospective cohort study of 48 patients with Stevens-Johnson syndrome (SJS)/toxic epidermal necrolysis (TEN) highlighted the need for a systematic, unbiased approach to identifying culprit drugs. When the cohort was analyzed using the RegiSCAR database and the algorithm of drug causality for epidermal necrolysis, more than half of the causative agents were determined to be different from those initially identified by the treating physicians. Nine additional suspected culprit drugs were identified, while 43 drugs initially identified as allergens were exonerated.6

Etiology-associated definitions for blistering reactions in children have been proposed to replace the existing terms Stevens-Johnson syndrome, toxic epidermal necrolysis, and others.7 Investigators in a recent study reclassified cases of SJS and TEN as reactive infectious mucocutaneous eruption (RIME) or drug-induced epidermal necrolysis (DEN), respectively. In RIME cases, Mycoplasma pneumoniae was the most commonly identified trigger, and in DEN cases, anticonvulsants were the most common class of culprit medications. Cases of RIME were less severe and were most often treated with antibiotics, whereas patients with DEN were more likely to receive supportive care, corticosteroids, intravenous immunoglobulins, and other immunosuppressive therapies.7

In addition to causing acute devastating mucocutaneous complications, SJS and TEN have long-lasting effects that require ongoing care. In a cohort of 6552 incident SJS/TEN cases over an 11-year period, survivors of SJS/TEN endured a mean loss of 9.4 years in life expectancy and excess health care expenditures of $3752 per year compared with age- and sex-matched controls. Patients with more severe disease, comorbid malignancy, diabetes, end-stage renal disease, or SJS/TEN sequelae experienced greater loss in life expectancy and lifetime health care expenditures.8 Separately, a qualitative study investigating the psychological impact of SJS/TEN in pediatric patients described sequelae including night terrors, posttraumatic stress disorder, depression, and anxiety for many years after the acute phase. Many patients reported a desire for increased support for their physical and emotional needs following hospital discharge.9

Skin and Soft Tissue Infections: Diagnosis, Management, and Prevention

Dermatology consultation has been shown to be a cost-effective intervention to improve outcomes in hospitalized patients with skin and soft tissue infections.10,11 In particular, cellulitis frequently is misdiagnosed, leading to unnecessary antibiotic use, hospitalizations, and major health care expenditures.12 Recognizing this challenge, researchers have worked to develop objective tools to improve diagnostic accuracy. In a large prospective prognostic validation study, Pulia et al13 found that thermal imaging alone or in combination with the ALT-70 prediction model (asymmetry, leukocytosis, tachycardia, and age ≥70 years) could be used successfully to reduce overdiagnosis of cellulitis. Both thermal imaging and the ALT-70 prediction model demonstrated robust sensitivity (93.5% and 98.8%, respectively) but low specificity (38.4% and 22.0%, respectively, and 53.9% when combined).13
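The ALT-70 model referenced above is a simple additive score. As a rough illustration, a minimal sketch follows; the point values and risk bands used here (asymmetry 3 points; leukocytosis, white blood cell count ≥10,000/µL, 1 point; tachycardia, heart rate ≥90 beats per minute, 1 point; age ≥70 years, 2 points; score ≤2 favoring pseudocellulitis and ≥5 favoring cellulitis) are assumptions drawn from the originally published model rather than from the validation study summarized here, and should be checked against the primary source before any clinical use.

```python
# Illustrative sketch of the ALT-70 cellulitis prediction model.
# Point values and cutoffs are assumptions from the originally
# published model, not from this review.

def alt70_score(asymmetric: bool, leukocytosis: bool,
                tachycardia: bool, age_ge_70: bool) -> int:
    """Return the ALT-70 score (range 0-7)."""
    score = 0
    if asymmetric:      # Asymmetry: unilateral involvement (3 points)
        score += 3
    if leukocytosis:    # WBC >= 10,000/uL (1 point)
        score += 1
    if tachycardia:     # Heart rate >= 90 bpm (1 point)
        score += 1
    if age_ge_70:       # Age >= 70 years (2 points)
        score += 2
    return score

def interpret(score: int) -> str:
    """Map a score to the published risk bands."""
    if score <= 2:
        return "cellulitis unlikely; reconsider pseudocellulitis"
    if score <= 4:
        return "indeterminate; consider dermatology consultation"
    return "cellulitis likely"

# An afebrile 75-year-old with unilateral leg erythema and a normal
# white blood cell count scores 3 + 2 = 5:
print(alt70_score(True, False, False, True))             # 5
print(interpret(alt70_score(True, False, False, True)))  # cellulitis likely
```

The low specificity reported in the validation study is visible in this structure: age and tachycardia alone can push many noninfectious presentations into the indeterminate range, which is why the authors pair the score with thermal imaging.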

In a systematic review, Kovacs et al14 analyzed case reports of pseudocellulitis caused by chemotherapeutic medications. Of the 81 cases selected, 58 (71.6%) were associated with gemcitabine, with the remaining 23 (28.4%) attributed to pemetrexed. Within this group, two-thirds of the patients received antibiotics before the correct diagnosis was made, and 36% experienced interruptions to their oncologic therapies. In contrast to infectious cellulitis, which tends to be unilateral and associated with an elevated erythrocyte sedimentation rate or C-reactive protein level, most chemotherapy-induced pseudocellulitis cases occurred bilaterally on the lower extremities, while erythrocyte sedimentation rate and C-reactive protein seldom were elevated.14

Necrotizing soft tissue infections (NSTIs) are severe life-threatening conditions characterized by widespread tissue destruction, signs of systemic toxicity, hemodynamic collapse, organ failure, and high mortality. Surgical inspection along with intraoperative tissue culture is the gold standard for diagnosis. Early detection, prompt surgical intervention, and appropriate antibiotic treatment are essential to reduce mortality and improve outcomes.15 A retrospective study of patients with surgically confirmed NSTIs assessed the incidence and risk factors for recurrence within 1 year following an initial NSTI of the lower extremity. Among 93 included patients, 32 (34.4%) had recurrence within 1 year, and more than half of recurrences occurred in the first 3 months (median, 66 days). The comparison of patients with and without recurrence showed similar proportions of antibiotic prophylaxis use after the first NSTI. There was significantly less compression therapy use (33.3% vs 62.3%; P=.013) and more negative pressure wound therapy use (83.3% vs 63.3%; P=.03) in the recurrence group, though the authors acknowledged that factors such as severity of pain and size of soft tissue defect may have affected the decisions for compression and negative pressure wound therapy.16

Residents of nursing homes are a particularly vulnerable population at high risk for health care–associated infections due to older age and a higher likelihood of having wounds, indwelling medical devices, and/or coexisting conditions.17 One cluster-randomized trial compared universal decolonization with routine-care bathing practices in nursing homes (N=28,956 residents). Decolonization entailed the use of chlorhexidine for all routine bathing and showering and administration of nasal povidone-iodine twice daily for the first 5 days after admission and then twice daily for 5 days every other week. Transfer to a hospital due to infection decreased from 62.9% to 52.2% with decolonization, for a difference in risk ratio of 16.6% (P<.001) compared with routine care. Additionally, the difference in risk ratio of the secondary end point (transfer to a hospital for any reason) was 14.6%. The number needed to treat was 9.7 to prevent 1 infection-related hospitalization and 8.9 to prevent 1 hospitalization for any reason.17
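The headline numbers from the decolonization trial can be reproduced approximately from the quoted proportions. The sketch below shows the crude arithmetic only; the trial's published figures (the 16.6% difference in risk ratio and the number needed to treat of 9.7) come from model-based adjusted estimates, so the unadjusted values computed here differ slightly from the reported ones.

```python
# Crude (unadjusted) arithmetic behind the decolonization trial figures
# quoted above. The published 16.6% difference in risk ratio and NNT of
# 9.7 are model-based adjusted estimates, so these raw values differ
# slightly from the reported numbers.

routine_care = 0.629    # infection-related hospital transfers, routine care
decolonization = 0.522  # infection-related hospital transfers, decolonization

absolute_risk_reduction = routine_care - decolonization
relative_reduction = absolute_risk_reduction / routine_care
crude_nnt = 1 / absolute_risk_reduction  # number needed to treat

print(round(absolute_risk_reduction, 3))  # 0.107
print(round(relative_reduction, 3))       # 0.17
print(round(crude_nnt, 1))                # 9.3
```

The crude number needed to treat (about 9.3) is close to the adjusted 9.7 reported by the trial, illustrating that roughly 1 infection-related hospitalization is averted for every 10 residents decolonized.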

Autoimmune Blistering Diseases

Although rare, AIBDs are potentially life-threatening cutaneous diseases that often require inpatient management. While corticosteroids remain the mainstay of initial AIBD management, rituximab is now well recognized as the steroid-sparing treatment of choice for patients with moderate to severe pemphigus. In a long-term follow-up study of Ritux 3—the trial that led to the US Food and Drug Administration approval of rituximab in the treatment of moderate to severe pemphigus vulgaris18—researchers assessed the long-term efficacy and safety of rituximab as a first-line treatment in patients with pemphigus.19 The 5- and 7-year disease-free survival rates without corticosteroid therapy for patients treated with rituximab were 76.7% and 72.1%, respectively, compared with 35.3% and 35.3% in those treated with prednisone alone (P<.001). Fewer serious adverse events were reported in those treated with rituximab plus prednisone compared with those treated with prednisone alone. None of the patients who maintained complete remission off corticosteroid therapy received any additional maintenance infusions of rituximab after the end of the Ritux 3 regimen (1 g of rituximab at day 0 and day 14, then 500 mg at months 12 and 18).19

By contrast, treatment of severe bullous pemphigoid (BP) often is less clear-cut, as no single therapeutic option has been shown to be superior to other immunomodulatory and immunosuppressive regimens, and the medical comorbidities of elderly patients with BP can be limiting. Fortunately, newer therapies with favorable safety profiles have emerged in recent years. In a multicenter retrospective study, 100 patients with BP received omalizumab after previously failing to respond to at least one alternative therapy. Disease control was obtained after a median of 10 days, and complete remission was achieved in 77% of patients in a median time of 3 months.20 In a multicenter retrospective cohort study of 146 patients with BP treated with dupilumab following the atopic dermatitis dosing schedule (one 600-mg dose followed by 300 mg every 2 weeks), disease control was achieved in a median of 14 days, while complete remission was achieved in 35.6% of patients, with 8.9% relapsing during the observation period.21 A retrospective case series of 30 patients with BP treated with dupilumab with maintenance dosing frequency tailored to individual patient response showed complete remission or marked response in 76.7% (23/30) of patients.22 A phase 2/3 randomized controlled trial of dupilumab in BP is currently ongoing (ClinicalTrials.gov identifier NCT04206553).

Pemphigoid gestationis is a rare autoimmune subepidermal bullous dermatosis of pregnancy that may be difficult to distinguish clinically from polymorphic eruption of pregnancy but confers notably different maternal and fetal risks. Researchers developed and validated a scoring system using clinical factors—history of pemphigoid gestationis, primigravidae, timing of rash onset, and specific clinical examination findings—that was able to differentiate between the 2 diseases with 79% sensitivity, 95% specificity, and an area under the curve of 0.93 without the need for advanced immunologic testing.23

Final Thoughts

Highlights of the literature from 2023-2024 demonstrate advancements in hospital-based dermatology as well as ongoing challenges. This year’s review emphasizes key developments in severe cutaneous adverse drug reactions, skin and soft tissue infections, and AIBDs. Continued expansion of knowledge in these areas and others informs patient care and demonstrates the value of dermatologic expertise in the inpatient setting.

References
  1. Berk-Krauss J, Micheletti RG. Hospital dermatology: review of research in 2022-2023. Cutis. 2023;112:236-239.
  2. Falanga V, Schachner LA, Rae V, et al. Dermatologic consultations in the hospital setting. Arch Dermatol. 1994;130:1022-1025.
  3. Kroshinsky D, Cotliar J, Hughey LC, et al. Association of dermatology consultation with accuracy of cutaneous disorder diagnoses in hospitalized patients: a multicenter analysis. JAMA Dermatol. 2016;152:477-480.
  4. Blumenthal KG, Alvarez-Arango S, Kroshinsky D, et al. Drug reaction eosinophilia and systemic symptoms: clinical phenotypic patterns according to causative drug. J Am Acad Dermatol. 2024;90:1240-1242.
  5. Brüggen MC, Walsh S, Ameri MM, et al. Management of adult patients with drug reaction with eosinophilia and systemic symptoms: a Delphi-based international consensus. JAMA Dermatol. 2024;160:37-44.
  6. Li DJ, Velasquez GA, Romar GA, et al. Assessment of need for improved identification of a culprit drug in Stevens-Johnson syndrome/toxic epidermal necrolysis. JAMA Dermatol. 2023;159:830-836.
  7. Martinez-Cabriales S, Coulombe J, Aaron M, et al. Preliminary summary and reclassification of cases from the Pediatric Research of Management in Stevens-Johnson syndrome and Epidermonecrolysis (PROMISE) study: a North American, multisite retrospective cohort. J Am Acad Dermatol. 2024;90:635-637.
  8. Chiu YM, Chiu HY. Lifetime risk, life expectancy, loss-of-life expectancy and lifetime healthcare expenditure for Stevens-Johnson syndrome/toxic epidermal necrolysis in Taiwan: follow-up of a nationwide cohort from 2008 to 2019. Br J Dermatol. 2023;189:553-560.
  9. Phillips C, Russell E, McNiven A, et al. A qualitative study of psychological morbidity in paediatric survivors of Stevens-Johnson syndrome/toxic epidermal necrolysis. Br J Dermatol. 2024;191:293-295.
  10. Li DG, Xia FD, Khosravi H, et al. Outcomes of early dermatology consultation for inpatients diagnosed with cellulitis. JAMA Dermatol. 2018;154:537-543.
  11. Milani-Nejad N, Zhang M, Kaffenberger BH. Association of dermatology consultations with patient care outcomes in hospitalized patients with inflammatory skin diseases. JAMA Dermatol. 2017;153:523-528.
  12. Weng QY, Raff AB, Cohen JM, et al. Costs and consequences associated with misdiagnosed lower extremity cellulitis. JAMA Dermatol. 2017;153:141-146.
  13. Pulia MS, Schwei RJ, Alexandridis R, et al. Validation of thermal imaging and the ALT-70 prediction model to differentiate cellulitis from pseudocellulitis. JAMA Dermatol. 2024;160:511-517.
  14. Kovacs LD, O’Donoghue M, Cogen AL. Chemotherapy-induced pseudocellulitis without prior radiation exposure: a systematic review. JAMA Dermatol. 2023;159:870-874.
  15. Yildiz H, Yombi JC. Necrotizing soft-tissue infections. Comment. N Engl J Med. 2018;378:970.
  16. Traineau H, Charpentier C, Lepeule R, et al. First-year recurrence rate of skin and soft tissue infections following an initial necrotizing soft tissue infection of the lower extremities: a retrospective cohort study of 93 patients. J Am Acad Dermatol. 2023;88:1360-1363.
  17. Miller LG, McKinnell JA, Singh RD, et al. Decolonization in nursing homes to prevent infection and hospitalization. N Engl J Med. 2023;389:1766-1777.
  18. Joly P, Maho-Vaillant M, Prost-Squarcioni C, et al; French Study Group on Autoimmune Bullous Skin Diseases. First-line rituximab combined with short-term prednisone versus prednisone alone for the treatment of pemphigus (Ritux 3): a prospective, multicentre, parallel-group, open-label randomised trial. Lancet. 2017;389:2031-2040.
  19. Tedbirt B, Maho-Vaillant M, Houivet E, et al; French Reference Center for Autoimmune Blistering Diseases MALIBUL. Sustained remission without corticosteroids among patients with pemphigus who had rituximab as first-line therapy: follow-up of the Ritux 3 Trial. JAMA Dermatol. 2024;160:290-296.
  20. Chebani R, Lombart F, Chaby G, et al; French Study Group on ­Autoimmune Bullous Diseases. Omalizumab in the treatment of bullous pemphigoid resistant to first-line therapy: a French national multicentre retrospective study of 100 patients. Br J Dermatol. 2024;190:258-265.
  21. Zhao L, Wang Q, Liang G, et al. Evaluation of dupilumab in patients with bullous pemphigoid. JAMA Dermatol. 2023;159:953-960.
  22. Miller AC, Temiz LA, Adjei S, et al. Treatment of bullous pemphigoid with dupilumab: a case series of 30 patients. J Drugs Dermatol. 2024;23:E144-E148.
  23. Xie F, Davis DMR, Baban F, et al. Development and multicenter international validation of a diagnostic tool to differentiate between pemphigoid gestationis and polymorphic eruption of pregnancy. J Am Acad Dermatol. 2023;89:106-113.
Author and Disclosure Information

Dr. Wei is from the Department of Dermatology, University of Washington, Seattle. Dr. Micheletti is from the Department of Dermatology, Perelman School of Medicine, University of Pennsylvania, Philadelphia.

Dr. Wei has no relevant financial disclosures to report. Dr. Micheletti is a consultant for Vertex and has received research grants from Amgen, Boehringer Ingelheim, Cabaletta Bio, and InflaRX.

Presented in part at the Society of Dermatology Hospitalists Annual Meeting; March 8, 2024; San Diego, California.

Correspondence: Robert G. Micheletti, MD, Department of Dermatology, Perelman School of Medicine, University of Pennsylvania, 3400 Civic Center Blvd, PCAM 7 South, Room 724, Philadelphia, PA 19104 ([email protected]).

Cutis. 2024 November;114(5):156-158, 168. doi:10.12788/cutis.1126


Inpatient consultative dermatology has advanced as a subspecialty and increasingly gained recognition in recent years. Since its founding in 2009, the Society of Dermatology Hospitalists has fostered research and education in hospital dermatology. Last year, we reviewed the 2022-2023 literature with a focus on developments in severe cutaneous adverse reactions, supportive oncodermatology, cost of inpatient services, and teledermatology.1 In this review, we highlight 3 areas of interest from the 2023-2024 literature: severe cutaneous adverse drug reactions, skin and soft tissue infections, and autoimmune blistering diseases (AIBDs).

Severe Cutaneous Adverse Drug Reactions

Adverse drug reactions are among the most common diagnoses encountered by inpatient dermatology consultants.2,3 Severe cutaneous adverse drug reactions are associated with substantial morbidity and mortality. Efforts to characterize these conditions and standardize their diagnosis and management continue to be a major focus of ongoing research.

A single-center retrospective analysis of 102 cases of drug reaction with eosinophilia and systemic symptoms (DRESS) syndrome evaluated differences in clinical manifestations depending on the culprit drug, offering insights into the heterogeneity of DRESS syndrome and the potential for diagnostic uncertainty.4 The shortest median latencies were observed in cases caused by penicillins and cephalosporins (12 and 18 days, respectively), while DRESS syndrome secondary to allopurinol had the longest median latency (36 days). Nonsteroidal anti-inflammatory drug–induced DRESS syndrome was associated with the shortest hospital stay (6.5 days), while cephalosporin and vancomycin cases had the highest mortality rates.4

In the first international Delphi consensus study on the diagnostic workup, severity assessment, and management of DRESS syndrome, 54 dermatology and/or allergy experts reached consensus on 93 statements.5 Specific recommendations included basic evaluation with complete blood count with differential, kidney and liver function parameters, and electrocardiogram for all patients with suspected DRESS syndrome, with additional complementary workup considered in patients with evidence of specific organ damage and/or severe disease. In the proposed DRESS syndrome severity grading scheme, laboratory values that reached consensus for inclusion were hemoglobin, neutrophil, and platelet counts and creatinine, transaminases, and alkaline phosphatase levels. Although treatment of DRESS syndrome should be based on assessed disease severity, treatment with corticosteroids should be initiated in all patients with confirmed DRESS syndrome. Cyclosporine, antibodies interfering with the IL-5 axis, and intravenous immunoglobulins can be considered in patients with corticosteroid-refractory DRESS syndrome, and antiviral treatment can be considered in patients with a high serum cytomegalovirus viral load. Regularly following up with laboratory evaluation of involved organs; screening for autoantibodies, thyroid dysfunction, and steroid adverse effects; and offering of psychological support also were consensus recommendations.5

Identifying causative agents in drug hypersensitivity reactions remains challenging. A retrospective cohort study of 48 patients with Stevens-Johnson syndrome (SJS)/toxic epidermal necrolysis (TEN) highlighted the need for a systematic, unbiased approach to identifying culprit drugs. When the RegiSCAR database and the algorithm of drug causality for epidermal necrolysis were used to analyze the cohort, more than half of the causative agents were determined to be different from those initially identified by the treating physicians. Nine additional suspected culprit drugs were identified, while 43 drugs initially identified as allergens were exonerated.6

Etiology-associated definitions for blistering reactions in children have been proposed to replace the existing terms Stevens-Johnson syndrome, toxic epidermal necrolysis, and others.7 Investigators in a recent study reclassified cases of SJS and TEN as reactive infectious mucocutaneous eruption (RIME) or drug-induced epidermal necrolysis (DEN), respectively. In RIME cases, Mycoplasma pneumoniae was the most commonly identified trigger, and in DEN cases, anticonvulsants were the most common class of culprit medications. Cases of RIME were less severe and were most often treated with antibiotics, whereas patients with DEN were more likely to receive supportive care, corticosteroids, intravenous immunoglobulins, and other immunosuppressive therapies.7

In addition to causing acute devastating mucocutaneous complications, SJS and TEN have long-lasting effects that require ongoing care. In a cohort of 6552 incident SJS/TEN cases over an 11-year period, survivors of SJS/TEN endured a mean loss of 9.4 years in life expectancy and excess health care expenditures of $3752 per year compared with age- and sex-matched controls. Patients with more severe disease, comorbid malignancy, diabetes, end-stage renal disease, or SJS/TEN sequelae experienced greater loss in life expectancy and lifetime health care expenditures.8 Separately, a qualitative study investigating the psychological impact of SJS/TEN in pediatric patients described sequelae including night terrors, posttraumatic stress disorder, depression, and anxiety for many years after the acute phase. Many patients reported a desire for increased support for their physical and emotional needs following hospital discharge.9

Skin and Soft Tissue Infections: Diagnosis, Management, and Prevention

Dermatology consultation has been shown to be a cost-effective intervention to improve outcomes in hospitalized patients with skin and soft tissue infections.10,11 In particular, cellulitis frequently is misdiagnosed, leading to unnecessary antibiotic use, hospitalizations, and major health care expenditures.12 Recognizing this challenge, researchers have worked to develop objective tools to improve diagnostic accuracy. In a large prospective prognostic validation study, Pulia et al13 found that thermal imaging alone or in combination with the ALT-70 prediction model (asymmetry, leukocytosis, tachycardia, and age ≥70 years) could be used successfully to reduce overdiagnosis of cellulitis. Both thermal imaging and the ALT-70 prediction model demonstrated robust sensitivity (93.5% and 98.8%, respectively) but low specificity (38.4% and 22.0%, respectively, and 53.9% when combined).13

In a systematic review, Kovacs et al14 analyzed case reports of pseudocellulitis caused by chemotherapeutic medications. Of the 81 cases selected, 58 (71.6%) were associated with gemcitabine, with the remaining 23 (28.4%) attributed to pemetrexed. Within this group, two-thirds of the patients received antibiotic treatment before the correct diagnosis was made, and 36% experienced interruptions to their oncologic therapies. In contrast to infectious cellulitis, which tends to be unilateral and associated with elevated erythrocyte sedimentation rate or C-reactive protein, most chemotherapy-induced pseudocellulitis cases occurred bilaterally on the lower extremities, while erythrocyte sedimentation rate and C-reactive protein seldom were elevated.14

Necrotizing soft tissue infections (NSTIs) are severe, life-threatening conditions characterized by widespread tissue destruction, signs of systemic toxicity, hemodynamic collapse, organ failure, and high mortality. Surgical inspection along with intraoperative tissue culture is the gold standard for diagnosis. Early detection, prompt surgical intervention, and appropriate antibiotic treatment are essential to reduce mortality and improve outcomes.15 A retrospective study of patients with surgically confirmed NSTIs assessed the incidence and risk factors for recurrence within 1 year following an initial NSTI of the lower extremity. Among 93 included patients, 32 (34.4%) had recurrence within 1 year, and more than half of recurrences occurred in the first 3 months (median, 66 days). The comparison of patients with and without recurrence showed similar proportions of antibiotic prophylaxis use after the first NSTI. There was less compression therapy use (33.3% vs 62.3%; P=.13) and significantly more negative pressure wound therapy use (83.3% vs 63.3%; P=.03) in the recurrence group, though the authors acknowledged that factors such as severity of pain and size of soft tissue defect may have affected the decisions for compression and negative pressure wound therapy.16

Residents of nursing homes are a particularly vulnerable population at high risk for health care–associated infections due to older age and a higher likelihood of having wounds, indwelling medical devices, and/or coexisting conditions.17 One cluster-randomized trial compared universal decolonization with routine-care bathing practices in nursing homes (N=28,956 residents). Decolonization entailed the use of chlorhexidine for all routine bathing and showering and administration of nasal povidone-iodine twice daily for the first 5 days after admission and then twice daily for 5 days every other week. Transfer to a hospital due to infection decreased from 62.9% to 52.2% with decolonization, for a difference in risk ratio of 16.6% (P<.001) compared with routine care. Additionally, the difference in risk ratio of the secondary end point (transfer to a hospital for any reason) was 14.6%. The number needed to treat was 9.7 to prevent 1 infection-related hospitalization and 8.9 to prevent 1 hospitalization for any reason.17

Autoimmune Blistering Diseases

Although rare, AIBDs are potentially life-threatening cutaneous diseases that often require inpatient management. While corticosteroids remain the mainstay of initial AIBD management, rituximab is now well recognized as the steroid-sparing treatment of choice for patients with moderate to severe pemphigus. In a long-term follow-up study of Ritux 318—the trial that led to the US Food and Drug Administration approval of rituximab in the treatment of moderate to severe pemphigus vulgaris—researchers assessed the long-term efficacy and safety of rituximab as a first-line treatment in patients with pemphigus.19 The 5- and 7-year disease-free survival rates without corticosteroid therapy for patients treated with rituximab were 76.7% and 72.1%, respectively, compared with 35.3% at both time points in those treated with prednisone alone (P<.001). Fewer serious adverse events were reported in those treated with rituximab plus prednisone compared with those treated with prednisone alone. None of the patients who maintained complete remission off corticosteroid therapy received any additional maintenance infusions of rituximab after the end of the Ritux 3 regimen (1 g of rituximab at day 0 and day 14, then 500 mg at months 12 and 18).19

By contrast, treatment of severe bullous pemphigoid (BP) often is less clear-cut, as no single therapeutic option has been shown to be superior to other immunomodulatory and immunosuppressive regimens, and the medical comorbidities of elderly patients with BP can be limiting. Fortunately, newer therapies with favorable safety profiles have emerged in recent years. In a multicenter retrospective study, 100 patients with BP received omalizumab after failing to respond to at least 1 prior therapy. Disease control was obtained after a median of 10 days, and complete remission was achieved in 77% of patients in a median time of 3 months.20 In a multicenter retrospective cohort study of 146 patients with BP treated with dupilumab following the atopic dermatitis dosing schedule (one 600-mg dose followed by 300 mg every 2 weeks), disease control was achieved in a median of 14 days, while complete remission was achieved in 35.6% of patients, with 8.9% relapsing during the observation period.21 A retrospective case series of 30 patients with BP treated with dupilumab with maintenance dosing frequency tailored to individual patient response showed complete remission or marked response in 76.7% (23/30) of patients.22 A phase 2/3 randomized controlled trial of dupilumab in BP is currently ongoing (ClinicalTrials.gov identifier NCT04206553).

Pemphigoid gestationis is a rare autoimmune subepidermal bullous dermatosis of pregnancy that may be difficult to distinguish clinically from polymorphic eruption of pregnancy but confers notably different maternal and fetal risks. Researchers developed and validated a scoring system using clinical factors—history of pemphigoid gestationis, primigravidae, timing of rash onset, and specific clinical examination findings—that was able to differentiate between the 2 diseases with 79% sensitivity, 95% specificity, and an area under the curve of 0.93 without the need for advanced immunologic testing.23

Final Thoughts

Highlights of the literature from 2023-2024 demonstrate advancements in hospital-based dermatology as well as ongoing challenges. This year’s review emphasizes key developments in severe cutaneous adverse drug reactions, skin and soft tissue infections, and AIBDs. Continued expansion of knowledge in these areas and others informs patient care and demonstrates the value of dermatologic expertise in the inpatient setting.

References
  1. Berk-Krauss J, Micheletti RG. Hospital dermatology: review of research in 2022-2023. Cutis. 2023;112:236-239.
  2. Falanga V, Schachner LA, Rae V, et al. Dermatologic consultations in the hospital setting. Arch Dermatol. 1994;130:1022-1025.
  3. Kroshinsky D, Cotliar J, Hughey LC, et al. Association of dermatology consultation with accuracy of cutaneous disorder diagnoses in hospitalized patients: a multicenter analysis. JAMA Dermatol. 2016;152:477-480.
  4. Blumenthal KG, Alvarez-Arango S, Kroshinsky D, et al. Drug reaction eosinophilia and systemic symptoms: clinical phenotypic patterns according to causative drug. J Am Acad Dermatol. 2024;90:1240-1242.
  5. Brüggen MC, Walsh S, Ameri MM, et al. Management of adult patients with drug reaction with eosinophilia and systemic symptoms: a Delphi-based international consensus. JAMA Dermatol. 2024;160:37-44.
  6. Li DJ, Velasquez GA, Romar GA, et al. Assessment of need for improved identification of a culprit drug in Stevens-Johnson syndrome/toxic epidermal necrolysis. JAMA Dermatol. 2023;159:830-836.
  7. Martinez-Cabriales S, Coulombe J, Aaron M, et al. Preliminary summary and reclassification of cases from the Pediatric Research of Management in Stevens-Johnson syndrome and Epidermonecrolysis (PROMISE) study: a North American, multisite retrospective cohort. J Am Acad Dermatol. 2024;90:635-637.
  8. Chiu YM, Chiu HY. Lifetime risk, life expectancy, loss-of-life expectancy and lifetime healthcare expenditure for Stevens-Johnson syndrome/toxic epidermal necrolysis in Taiwan: follow-up of a nationwide cohort from 2008 to 2019. Br J Dermatol. 2023;189:553-560.
  9. Phillips C, Russell E, McNiven A, et al. A qualitative study of psychological morbidity in paediatric survivors of Stevens-Johnson syndrome/toxic epidermal necrolysis. Br J Dermatol. 2024;191:293-295.
  10. Li DG, Xia FD, Khosravi H, et al. Outcomes of early dermatology consultation for inpatients diagnosed with cellulitis. JAMA Dermatol. 2018;154:537-543.
  11. Milani-Nejad N, Zhang M, Kaffenberger BH. Association of dermatology consultations with patient care outcomes in hospitalized patients with inflammatory skin diseases. JAMA Dermatol. 2017;153:523-528.
  12. Weng QY, Raff AB, Cohen JM, et al. Costs and consequences associated with misdiagnosed lower extremity cellulitis. JAMA Dermatol. 2017;153:141-146.
  13. Pulia MS, Schwei RJ, Alexandridis R, et al. Validation of thermal imaging and the ALT-70 prediction model to differentiate cellulitis from pseudocellulitis. JAMA Dermatol. 2024;160:511-517.
  14. Kovacs LD, O’Donoghue M, Cogen AL. Chemotherapy-induced pseudocellulitis without prior radiation exposure: a systematic review. JAMA Dermatol. 2023;159:870-874.
  15. Yildiz H, Yombi JC. Necrotizing soft-tissue infections [comment]. N Engl J Med. 2018;378:970.
  16. Traineau H, Charpentier C, Lepeule R, et al. First-year recurrence rate of skin and soft tissue infections following an initial necrotizing soft tissue infection of the lower extremities: a retrospective cohort study of 93 patients. J Am Acad Dermatol. 2023;88:1360-1363.
  17. Miller LG, McKinnell JA, Singh RD, et al. Decolonization in nursing homes to prevent infection and hospitalization. N Engl J Med. 2023;389:1766-1777.
  18. Joly P, Maho-Vaillant M, Prost-Squarcioni C, et al; French Study Group on Autoimmune Bullous Skin Diseases. First-line rituximab combined with short-term prednisone versus prednisone alone for the treatment of pemphigus (Ritux 3): a prospective, multicentre, parallel-group, open-label randomised trial. Lancet. 2017;389:2031-2040.
  19. Tedbirt B, Maho-Vaillant M, Houivet E, et al; French Reference Center for Autoimmune Blistering Diseases MALIBUL. Sustained remission without corticosteroids among patients with pemphigus who had rituximab as first-line therapy: follow-up of the Ritux 3 Trial. JAMA Dermatol. 2024;160:290-296.
  20. Chebani R, Lombart F, Chaby G, et al; French Study Group on ­Autoimmune Bullous Diseases. Omalizumab in the treatment of bullous pemphigoid resistant to first-line therapy: a French national multicentre retrospective study of 100 patients. Br J Dermatol. 2024;190:258-265.
  21. Zhao L, Wang Q, Liang G, et al. Evaluation of dupilumab in patients with bullous pemphigoid. JAMA Dermatol. 2023;159:953-960.
  22. Miller AC, Temiz LA, Adjei S, et al. Treatment of bullous pemphigoid with dupilumab: a case series of 30 patients. J Drugs Dermatol. 2024;23:E144-E148.
  23. Xie F, Davis DMR, Baban F, et al. Development and multicenter international validation of a diagnostic tool to differentiate between pemphigoid gestationis and polymorphic eruption of pregnancy. J Am Acad Dermatol. 2023;89:106-113.
References
  1. Berk-Krauss J, Micheletti RG. Hospital dermatology: review of research in 2022-2023. Cutis. 2023;112:236-239.
  2. Falanga V, Schachner LA, Rae V, et al. Dermatologic consultations in the hospital setting. Arch Dermatol. 1994;130:1022-1025.
  3. Kroshinsky D, Cotliar J, Hughey LC, et al. Association of dermatology consultation with accuracy of cutaneous disorder diagnoses in hospitalized patients: a multicenter analysis. JAMA Dermatol. 2016;152:477-480.
  4. Blumenthal KG, Alvarez-Arango S, Kroshinsky D, et al. Drug reaction eosinophilia and systemic symptoms: clinical phenotypic patterns according to causative drug. J Am Acad Dermatol. 2024;90:1240-1242.
  5. Brüggen MC, Walsh S, Ameri MM, et al. Management of adult patients with drug reaction with eosinophilia and systemic symptoms: a Delphi-based international consensus. JAMA Dermatol. 2024;160:37-44.
  6. Li DJ, Velasquez GA, Romar GA, et al. Assessment of need for improved identification of a culprit drug in Stevens-Johnson syndrome/toxic epidermal necrolysis. JAMA Dermatol. 2023;159:830-836.
  7. Martinez-Cabriales S, Coulombe J, Aaron M, et al. Preliminary summary and reclassification of cases from the Pediatric Research of Management in Stevens-Johnson syndrome and Epidermonecrolysis (PROMISE) study: a North American, multisite retrospective cohort. J Am Acad Dermatol. 2024;90:635-637.
  8. Chiu YM, Chiu HY. Lifetime risk, life expectancy, loss-of-life expectancy and lifetime healthcare expenditure for Stevens-Johnson syndrome/toxic epidermal necrolysis in Taiwan: follow-up of a nationwide cohort from 2008 to 2019. Br J Dermatol. 2023;189:553-560.
  9. Phillips C, Russell E, McNiven A, et al. A qualitative study of psychological morbidity in paediatric survivors of Stevens-Johnson syndrome/toxic epidermal necrolysis. Br J Dermatol. 2024;191:293-295.
  10. Li DG, Xia FD, Khosravi H, et al. Outcomes of early dermatology consultation for inpatients diagnosed with cellulitis. JAMA Dermatol. 2018;154:537-543.
  11. Milani-Nejad N, Zhang M, Kaffenberger BH. Association of dermatology consultations with patient care outcomes in hospitalized patients with inflammatory skin diseases. JAMA Dermatol. 2017;153:523-528.
  12. Weng QY, Raff AB, Cohen JM, et al. Costs and consequences associated with misdiagnosed lower extremity cellulitis. JAMA Dermatol. 2017;153:141-146.
  13. Pulia MS, Schwei RJ, Alexandridis R, et al. Validation of thermal imaging and the ALT-70 prediction model to differentiate cellulitis from pseudocellulitis. JAMA Dermatol. 2024;160:511-517.
  14. Kovacs LD, O’Donoghue M, Cogen AL. Chemotherapy-induced pseudocellulitis without prior radiation exposure: a systematic review. JAMA Dermatol. 2023;159:870-874.
  15. Yildiz H, Yombi JC. Necrotizing soft-tissue infections [comment]. N Engl J Med. 2018;378:970.
  16. Traineau H, Charpentier C, Lepeule R, et al. First-year recurrence rate of skin and soft tissue infections following an initial necrotizing soft tissue infection of the lower extremities: a retrospective cohort study of 93 patients. J Am Acad Dermatol. 2023;88:1360-1363.
  17. Miller LG, McKinnell JA, Singh RD, et al. Decolonization in nursing homes to prevent infection and hospitalization. N Engl J Med. 2023;389:1766-1777.
  18. Joly P, Maho-Vaillant M, Prost-Squarcioni C, et al; French Study Group on Autoimmune Bullous Skin Diseases. First-line rituximab combined with short-term prednisone versus prednisone alone for the treatment of pemphigus (Ritux 3): a prospective, multicentre, parallel-group, open-label randomised trial. Lancet. 2017;389:2031-2040.
  19. Tedbirt B, Maho-Vaillant M, Houivet E, et al; French Reference Center for Autoimmune Blistering Diseases MALIBUL. Sustained remission without corticosteroids among patients with pemphigus who had rituximab as first-line therapy: follow-up of the Ritux 3 Trial. JAMA Dermatol. 2024;160:290-296.
  20. Chebani R, Lombart F, Chaby G, et al; French Study Group on Autoimmune Bullous Diseases. Omalizumab in the treatment of bullous pemphigoid resistant to first-line therapy: a French national multicentre retrospective study of 100 patients. Br J Dermatol. 2024;190:258-265.
  21. Zhao L, Wang Q, Liang G, et al. Evaluation of dupilumab in patients with bullous pemphigoid. JAMA Dermatol. 2023;159:953-960.
  22. Miller AC, Temiz LA, Adjei S, et al. Treatment of bullous pemphigoid with dupilumab: a case series of 30 patients. J Drugs Dermatol. 2024;23:E144-E148.
  23. Xie F, Davis DMR, Baban F, et al. Development and multicenter international validation of a diagnostic tool to differentiate between pemphigoid gestationis and polymorphic eruption of pregnancy. J Am Acad Dermatol. 2023;89:106-113.
Page Number
156-157
Display Headline
Hospital Dermatology: Review of Research in 2023-2024

Practice Points

  • An international Delphi study reached consensus on 93 statements regarding workup, severity assessment, and management of DRESS syndrome.
  • In nursing homes, universal decolonization with chlorhexidine and nasal iodophor greatly reduced the risk for hospital transfers due to infection compared to routine care.
  • Rituximab as the first-line therapy for pemphigus vulgaris is associated with long-term sustained complete remission without corticosteroid therapy.
  • Dupilumab and omalizumab are emerging safe and effective treatment options for bullous pemphigoid.

On Second Thought: Aspirin for Primary Prevention — What We Really Know

Article Type
Changed
Wed, 11/13/2024 - 02:26

This transcript has been edited for clarity.

Aspirin. Once upon a time, everybody over age 50 years was supposed to take a baby aspirin. Now we make it a point to tell people to stop. What is going on?  

Our recommendations vis-à-vis aspirin have evolved at a dizzying pace. The young’uns watching us right now don’t know what things were like in the 1980s. The Reagan era was a wild, heady time where nuclear war was imminent and we didn’t prescribe aspirin to patients. 

That only started in 1988, which was a banner year in human history. Not because a number of doves were incinerated by the lighting of the Olympic torch at the Seoul Olympics — look it up if you don’t know what I’m talking about — but because 1988 saw the publication of the ISIS-2 trial, which first showed a mortality benefit to prescribing aspirin post–myocardial infarction (MI).

Giving patients aspirin during or after a heart attack is not controversial. It’s one of the few things in this business that isn’t, but that’s secondary prevention — treating somebody after they develop a disease. Primary prevention, treating them before they have their incident event, is a very different ballgame. Here, things are messy. 

For one thing, the doses used have been very inconsistent. We should point out that the 81-mg dose of aspirin is very arbitrary and is rooted in the old apothecary system of weights and measures. A standard dose of aspirin was 5 grains, where 20 grains made 1 scruple, 3 scruples made 1 dram, 8 drams made 1 oz, and 12 oz made 1 lb, because screw you, metric system. Five grains therefore worked out to about 325 mg of aspirin, and one quarter of the standard dose became 81 mg once you rounded off the decimal. 
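For the curious, the grain-to-milligram arithmetic is a one-liner. This is just a sketch of the conversion described above, using the modern definition of the apothecary grain (exactly 64.79891 mg):

```python
# Apothecary arithmetic behind the odd 81-mg "baby" aspirin dose.
GRAIN_MG = 64.79891               # one grain, in milligrams (exact, by definition)

standard_dose = 5 * GRAIN_MG      # ~324 mg, sold as the familiar 325-mg tablet
baby_aspirin = 325 / 4            # 81.25 mg, rounded off to 81 mg

print(round(standard_dose), baby_aspirin)  # 324 81.25
```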

People have tried all kinds of dosing structures with aspirin prophylaxis. The Physicians’ Health Study used a full-dose aspirin, 325 mg every 2 days, while the Hypertension Optimal Treatment (HOT) trial tested 75 mg daily and the Women’s Health Study tested 100 mg, but every other day. 

Ironically, almost no one has studied 81 mg every day, which is weird if you think about it. The bigger problem here is not the variability of doses used, but the discrepancy when you look at older vs newer studies.

Older studies, like the Physicians’ Health Study, did show a benefit, at least in the subgroup of patients over age 50 years, which is probably where the “everybody over 50 should be taking an aspirin” idea comes from, at least as near as I can tell. 

More recent studies, like the Women’s Health Study, ASPREE, and ARRIVE, didn’t show a benefit. I know what you’re thinking: Newer stuff is always better. That’s why you should never trust anybody over age 40 years. But the context of primary prevention studies has changed. In the ‘80s and ‘90s, people smoked more and we didn’t have the same medications that we have today. We talked about all this in the beta-blocker video to explain why beta-blockers don’t seem to have a benefit post MI.

We have a similar issue here. The magnitude of the benefit with aspirin primary prevention has decreased because we’re all just healthier overall. So, yay! Progress! Here’s where the numbers matter. No one is saying that aspirin doesn’t help. It does. 

If we look at the 2019 meta-analysis published in JAMA, there is a cardiovascular benefit, and the numbers bear that out. I know you’re all here for the math, so here we go. Aspirin reduced the composite cardiovascular endpoint from 65.2 to 60.2 events per 10,000 patient-years. To put it more meaningfully in absolute terms, because that’s my jam, that’s an absolute risk reduction of 0.41%, which means a number needed to treat of 241. That’s okay-ish. It’s not super-great, but it may be justifiable for something that costs next to nothing. 

The tradeoff is bleeding. Major bleeding increased from 16.4 to 23.1 bleeds per 10,000 patient-years, or an absolute risk increase of 0.47%, which is a number needed to harm of 210. That’s the problem. Aspirin does prevent heart disease. The benefit is small, for sure, but the real problem is that it’s outweighed by the risk of bleeding, so you’re not really coming out ahead. 
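As a quick sanity check on the arithmetic, number needed to treat (or harm) is just the reciprocal of the absolute risk difference. The 0.41% and 0.47% figures are the ones quoted above; the published NNT and NNH of 241 and 210 were computed from unrounded risks, so recomputing from the rounded percentages lands a few patients away:

```python
# NNT/NNH from an absolute risk difference expressed as a fraction.
def number_needed(abs_risk_difference: float) -> float:
    """Patients treated per one event prevented (NNT) or caused (NNH)."""
    return 1.0 / abs_risk_difference

nnt = number_needed(0.0041)    # absolute risk reduction, composite CV endpoint
nnh = number_needed(0.0047)    # absolute risk increase, major bleeding

print(round(nnt), round(nnh))  # 244 213 (vs published 241 and 210)
```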

The real tragedy here is that the public is locked into the idea that everyone over age 50 years should be taking an aspirin. Even today, even though guidelines have recommended against aspirin for primary prevention for some time, data from the National Health Interview Survey found that nearly one in three older adults take aspirin for primary prevention when they shouldn’t be. That’s a large number of people. That’s millions of Americans (and Canadians, but nobody cares about us). It’s fine. 

That’s the point. We’re not debunking aspirin. It does work. The benefits are just really small in a primary prevention population and offset by the admittedly also really small risks of bleeding. It’s a tradeoff that doesn’t really work in your favor.

But that’s aspirin for cardiovascular disease. When it comes to cancer or DVT prophylaxis, that’s another really interesting story. We might have to save that for another time. Do I know how to tease a sequel or what?

Labos, a cardiologist at Kirkland Medical Center, Montreal, Quebec, Canada, has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

