A 55-year-old woman presented with a few years’ history of pruritic plaques on her shins and wrists.
A definitive diagnosis can be made via skin biopsy. Histopathology reveals hyperkeratosis, acanthosis, and a band-like lymphocytic infiltrate in the dermis. An eosinophilic infiltrate may be present. Other common features include sawtooth rete ridges and Civatte bodies, which are apoptotic keratinocytes. The lymphocytic infiltrate may indicate an autoimmune etiology in which the body’s immune system erroneously attacks itself. However, the exact cause is not known, and genetic and environmental factors may play a role.
The treatment of hypertrophic lichen planus (HLP) includes symptomatic management and control of inflammation. Topical steroids can be prescribed to manage the inflammation and associated pruritus, and emollient creams and moisturizers help control dryness. Oral steroids, immunosuppressants, or retinoids may be necessary in more severe cases. In addition, psoralen plus ultraviolet A (PUVA) light therapy has been found to be beneficial in some cases. Squamous cell carcinoma may arise in lesions.
This case and photo were submitted by Lucas Shapiro, BS, of Nova Southeastern University College of Osteopathic Medicine, Fort Lauderdale, Florida, and Donna Bilu Martin, MD, of Premier Dermatology, MD, Aventura, Florida. The column was edited by Dr. Bilu Martin.
Dr. Bilu Martin is a board-certified dermatologist in private practice at Premier Dermatology, MD, in Aventura, Fla. More diagnostic cases are available at mdedge.com/dermatology. To submit a case for possible publication, send an email to [email protected].
References
Arnold DL, Krishnamurthy K. Lichen Planus. [Updated 2023 Jun 1]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2023 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK526126/
Jaime TJ et al. An Bras Dermatol. 2011 Jul-Aug;86(4 Suppl 1):S96-9.
Mirchandani S et al. Med Pharm Rep. 2020 Apr;93(2):210-2.
Whittington CP et al. Arch Pathol Lab Med. 2023 Jun 19. doi: 10.5858/arpa.2022-0515-RA.
Erectile Dysfunction Rx: Give It a Shot
This transcript has been edited for clarity.
I’m Dr Rachel Rubin. I am a urologist with fellowship training in sexual medicine. Today I’m going to explain why I may recommend that your patients put a needle directly into their penises for help with erectile dysfunction (ED).
I know that sounds crazy, but in a recent video when I talked about erection hardness, I acknowledged that it may not be easy to talk with patients about their penises, but it’s important.
ED can be a marker for cardiovascular disease, with 50% of our 50-year-old patients having ED. As physicians, we must do a better job of talking to our patients about ED and letting them know that it’s a marker for overall health.
How do we treat ED? Primary care doctors can do a great deal for patients with ED, and there are other things that urologists can do when you run out of options in your own toolbox.
What’s important for a healthy erection? You need three things: healthy muscle, healthy nerves, and healthy arteries. If anything goes wrong with muscles, nerves, or arteries, this is what leads to ED. Think through the algorithm of your patient’s medical history: Do they have diabetes, which can affect their nerves? Do they have high blood pressure, which can affect their arteries? Do they have problems with testosterone, which can affect the smooth muscles of the penis? Understanding your patient’s history can be really helpful when you figure out what is the best treatment strategy for your patient.
For the penis to work, those smooth muscles have to relax; therefore, your brain has to be relaxed, along with your pelvic floor muscles. The smooth muscle of the penis has to be relaxed so it can fill with blood, increase in girth and size, and hold that erection in place.
To treat ED, we have a biopsychosocial toolbox. Biology refers to the muscles, arteries, and nerves. The psychosocial component is stress: If your brain is stressed, you have a lot of adrenaline around that can tighten those smooth muscles and cause you to lose an erection.
So, what are these treatments? I’ll start with lifestyle. A healthy heart means a healthy penis, so all of the things you already recommend for lifestyle changes can really help with ED. Sleep is important. Does your patient need a sleep study? Do they have sleep apnea? Are they exercising? Recent data show that exercise may be just as effective as, if not more effective than, Viagra. How about a good diet? The Mediterranean diet seems to be the most helpful. So, encourage your patients to make dietary, exercise, sleep, and other lifestyle changes if they want to improve erectile function.
What about sex education? Most physicians didn’t get great education about sex in medical school, but it’s very important to our patients who likewise have had inadequate sex education. Ask questions, talk to them, explain what is normal.
I can’t stress enough how important mental health is to a great sex life. Everyone would benefit from sex therapy and becoming better at sex. We need to get better at communicating and educating patients and their partners to maximize their quality of life. If you need to refer to a specialist, we recommend going to psychologytoday.com or aasect.org to find a local sex therapist. Call them and use them in your referral networks.
In the “bio” component of the biopsychosocial approach, we can do a lot to treat ED with medications and hormones. Testosterone has been shown to help with low libido and erectile function. Checking the patient’s testosterone level can be very helpful. Pills — we are familiar with Viagra, Cialis, Levitra, and Stendra. The oral PDE-5 inhibitors have been around since the late 1990s and they work quite well for many people with ED. Viagra and Cialis are generic now and patients can get them fairly inexpensively with discount coupons from GoodRx or Cost Plus Drugs. They may not even have to worry about insurance coverage.
Pills relax the smooth muscle of the penis so that it fills with blood and becomes erect, but they don’t work for everybody. If pills stop working, we often talk about synergistic treatments — combining pills and devices. Devices for ED should be discussed more often, and clinicians should consider prescribing them. We commonly discuss eyeglasses and wheelchairs, but we don’t talk about the sexual health devices that could help patients have more success and fun in the bedroom.
What are the various types of devices for ED? One common device is a vacuum pump, which can be very effective. This is how it works: The penis is lubricated and placed into the pump. A button on the pump creates suction that brings blood into the penis. The patient then applies a constriction band around the base of the penis to hold that erection in place.
“Sex tech” has really expanded to help patients with ED with devices that vibrate and hold the erection in place. Vibrating devices allow for a better orgasm. We even have devices that monitor erectile fitness (like a Fitbit for the penis), gathering data to help patients understand the firmness of their erections.
Devices are helpful adjuncts, but they don’t always do enough to achieve an erect penis that’s hard enough for penetration. In those cases, we can recommend injections that increase smooth muscle relaxation of the penis. I know it sounds crazy. If the muscles, arteries, and nerves of the penis aren’t functioning well, additional smooth muscle relaxation can be achieved by injecting alprostadil (prostaglandin E1) directly into the penis. It’s a tiny needle. It doesn’t hurt. These injections can be quite helpful for our patients, and we often recommend them.
But what happens when your patient doesn’t even respond to injections or any of the synergistic treatments? They’ve tried everything. Urologists may suggest a surgical option, the penile implant. Penile implants contain a pump inside the scrotum that fills with fluid, allowing a rigid erection. Penile implants are wonderful for patients who can no longer get erections. Talking to a urologist about the pros and the cons and the risks and benefits of surgically placed implants is very important.
Finally, ED is a marker for cardiovascular disease. These patients may need a cardiology workup. They need to improve their general health. We have to ask our patients about their goals and what they care about, and find a toolbox that makes sense for each patient and couple to maximize their sexual health and quality of life. Don’t give up. If you have questions, let us know.
Rachel S. Rubin, MD, is Assistant Clinical Professor, Department of Urology, Georgetown University, Washington, DC; Private practice, Rachel Rubin MD PLLC, North Bethesda, Maryland. She disclosed ties with Sprout, Maternal Medical, Absorption Pharmaceuticals, GSK, and Endo.
A version of this article appeared on Medscape.com.
Supercharge your medical practice with ChatGPT: Here’s why you should upgrade
Artificial intelligence (AI) has already demonstrated its potential in various areas of healthcare, from early disease detection and drug discovery to genomics and personalized care. OpenAI’s ChatGPT, a large language model, is one AI tool that has been transforming practices across the globe, including mine.
ChatGPT is essentially an AI-fueled assistant, capable of interpreting and generating human-like text in response to user inputs. Imagine a well-informed and competent trainee working with you, ready to tackle tasks from handling patient inquiries to summarizing intricate medical literature.
Currently, ChatGPT works on a “freemium” pricing model: there is a free version built on GPT-3.5, as well as a subscription “ChatGPT Plus” version based on GPT-4, which offers additional features such as the use of third-party plug-ins.
Now, you may ask, “Isn’t the free version enough?” The free version is indeed impressive, but upgrading to the paid version for $20 per month unlocks the full potential of this tool, particularly if we add plug-ins.
Here are some of the best ways to incorporate ChatGPT Plus into your practice.
Time saver and efficiency multiplier. The paid version of ChatGPT is an extraordinary time-saving tool. It can help you sort through vast amounts of medical literature in a fraction of the time it would normally take. Imagine having to sift through hundreds of articles to find the latest research relevant to a patient’s case. With the paid version of ChatGPT, you can simply ask it to provide summaries of the most recent and relevant studies, all in seconds.
Did you forget about that PowerPoint you need to make, even though you already know which papers you would use? No problem. ChatGPT can create slides in a few minutes. It becomes your on-demand research assistant.
Of course, you need to provide the sources you find most relevant. Plug-ins such as ScholarAI and Link Reader are great for this.
Improved patient communication. Explaining complex medical terminology and procedures to patients can sometimes be a challenge. ChatGPT can generate simplified and personalized explanations for your patients, fostering their understanding and involvement in their care process.
Epic is currently collaborating with Nuance Communications, Microsoft’s speech recognition subsidiary, to bring generative AI tools for medical note-taking into the electronic health record. However, you do not need to wait for that integration; it takes just a prompt in ChatGPT, then copying and pasting the results into the chart.
Smoother administrative management. The premium version of ChatGPT can automate administrative tasks such as creating letters of medical necessity, clearance letters to other physicians, or communications to staff on specific topics. This frees you to focus more on your core work: providing patient care.
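To make this concrete, here is a minimal sketch of how a practice might standardize such prompts in a few lines of Python. The template wording, function name, and clinical details below are hypothetical placeholders for illustration, not a vetted form; the generated text is simply pasted into ChatGPT (or sent via an API) and the draft it returns is then reviewed and edited by the clinician.

```python
# Sketch: build a reusable, de-identified prompt for a letter of medical necessity.
# Template text and field names are illustrative assumptions, not a vetted form.

LMN_TEMPLATE = (
    "Draft a concise letter of medical necessity for {treatment} "
    "to treat {diagnosis}. The patient has already tried: {prior_therapies}. "
    "Address it to the insurance plan's medical director. "
    "Do not invent clinical details."
)

def build_lmn_prompt(treatment: str, diagnosis: str, prior_therapies: list) -> str:
    """Fill the template; keep all patient identifiers out of the prompt."""
    return LMN_TEMPLATE.format(
        treatment=treatment,
        diagnosis=diagnosis,
        prior_therapies=", ".join(prior_therapies),
    )

prompt = build_lmn_prompt(
    treatment="dupilumab",
    diagnosis="moderate-to-severe atopic dermatitis",
    prior_therapies=["topical corticosteroids", "topical calcineurin inhibitors"],
)
print(prompt)
```

Keeping the template in one place means every letter request is phrased consistently and, just as important, never includes patient identifiers.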
Precision medicine aid. ChatGPT can be a powerful ally in the field of precision medicine. Its capabilities for analyzing large datasets and unearthing valuable insights can help deliver more personalized and potentially effective treatment plans. For example, one can prompt ChatGPT to query the reported frequency of certain genomic variants and their implications; with the upgraded version and plug-ins, the results will have fewer hallucinations (that is, inaccurate results) and will include key data references.
Unlimited accessibility. Uninterrupted access is a compelling reason to upgrade. While the free version may have usage limitations, the premium version provides unrestricted, round-the-clock access. Be it a late-night research quest or an early-morning patient query, your AI assistant will always be available.
Strengthened privacy and security. The premium version of ChatGPT includes heightened privacy and security measures. Just make sure to follow HIPAA and not include identifiers when making queries.
Embracing AI tools like ChatGPT in your practice can help you stay at the cutting edge of medical care, saving you time, enhancing patient communication, and supporting you in providing personalized care.
While the free version can serve as a good starting point (there are apps for both iOS and Android), upgrading to the paid version opens up a world of possibilities that can truly supercharge your practice.
I would love to hear your comments on this column or on future topics. Contact me at [email protected].
Arturo Loaiza-Bonilla, MD, MSEd, is the cofounder and chief medical officer at Massive Bio, a company connecting patients to clinical trials using artificial intelligence. His research and professional interests focus on precision medicine, clinical trial design, digital health, entrepreneurship, and patient advocacy. Dr. Loaiza-Bonilla is Assistant Professor of Medicine, Drexel University School of Medicine, Philadelphia, Pennsylvania, and serves as medical director of oncology research at Capital Health in New Jersey, where he maintains a connection to patient care by attending to patients 2 days a week. He has financial relationships with Verify, PSI CRO, Bayer, AstraZeneca, Cardinal Health, BrightInsight, The Lynx Group, Fresenius, Pfizer, Ipsen, Guardant, Amgen, Eisai, Natera, Merck, and Bristol Myers Squibb.
A version of this article appeared on Medscape.com.
Artificial intelligence (AI) has already demonstrated its potential in various areas of healthcare, from early disease detection and drug discovery to genomics and personalized care. OpenAI’s ChatGPT, a large language model, is one AI tool that has been transforming practices across the globe, including mine.
ChatGPT is essentially an AI-fueled assistant, capable of interpreting and generating human-like text in response to user inputs. Imagine a well-informed and competent trainee working with you, ready to tackle tasks from handling patient inquiries to summarizing intricate medical literature.
Currently, ChatGPT works on the “freemium” pricing model: there is a free version built on GPT-3.5 as well as a subscription “ChatGPT Plus” version based on GPT-4, which offers additional features such as the use of third-party plug-ins.
Now, you may ask, “Isn’t the free version enough?” The free version is indeed impressive, but upgrading to the paid version for $20 per month unlocks the full potential of this tool, particularly if we add plug-ins.
Here are some of the best ways to incorporate ChatGPT Plus into your practice.
Time saver and efficiency multiplier. The paid version of ChatGPT is an extraordinary time-saving tool. It can help you sort through vast amounts of medical literature in a fraction of the time it would normally take. Imagine having to sift through hundreds of articles to find the latest research relevant to a patient’s case. With the paid version of ChatGPT, you can simply ask it to provide summaries of the most recent and relevant studies, all in seconds.
Forgot about that PowerPoint you need to make, but already know which papers you would draw on? No problem. ChatGPT can draft the slides in a few minutes, becoming your on-demand research assistant.
Of course, you need to provide the sources you find most relevant. Plug-ins such as ScholarAI and Link Reader are great for this.
Improved patient communication. Explaining complex medical terminology and procedures to patients can sometimes be a challenge. ChatGPT can generate simplified and personalized explanations for your patients, fostering their understanding and involvement in their care process.
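For instance, a reusable prompt template helps keep these explanations consistent from patient to patient. The helper below is a hypothetical sketch (the function name and wording are my own, not part of any ChatGPT tool); the returned string is what you would paste into ChatGPT:

```python
def patient_explainer_prompt(term: str, reading_level: str = "8th grade") -> str:
    """Build a ChatGPT prompt asking for a plain-language explanation
    of a medical term; note that no patient identifiers are included."""
    return (
        f"Explain '{term}' to a patient at a {reading_level} reading level. "
        "Use short sentences, avoid jargon, and end with two questions "
        "patients commonly ask about it, each with a brief answer."
    )

print(patient_explainer_prompt("actinic keratosis"))
```

Because the template never includes names or record numbers, the same prompt can be reused safely across visits.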
Epic is currently collaborating with Nuance Communications, Microsoft’s speech recognition subsidiary, to use generative AI tools for medical note-taking in the electronic health record. However, you do not need to wait for it: a prompt in ChatGPT, followed by copying and pasting the results into the chart, accomplishes much the same thing.
Smoother administrative management. The premium version of ChatGPT can automate administrative tasks such as creating letters of medical necessity, clearance letters to other physicians, or even communications to staff on specific topics. This frees you to focus more on your core work: providing patient care.
Precision medicine aid. ChatGPT can be a powerful ally in the field of precision medicine. Its capabilities for analyzing large datasets and unearthing valuable insights can help deliver more personalized and potentially effective treatment plans. For example, one can prompt ChatGPT to query the reported frequency of certain genomic variants and their implications; with the upgraded version and plug-ins, the results will have fewer hallucinations — inaccurate results — and key data references.
Unlimited accessibility. Uninterrupted access is a compelling reason to upgrade. While the free version may have usage limitations, the premium version provides unrestricted, round-the-clock access. Be it a late-night research quest or an early-morning patient query, your AI assistant will always be available.
Strengthened privacy and security. The premium version of ChatGPT includes heightened privacy and security measures. Just make sure to follow HIPAA and not include identifiers when making queries.
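A lightweight habit that supports this is scrubbing obvious identifiers before text ever reaches a query box. The sketch below is illustrative only (the patterns are my own and catch only a few common identifier formats); a real HIPAA workflow requires a validated de-identification process, not a quick regex pass:

```python
import re

# Illustrative patterns only; these catch a handful of identifier formats,
# not the full set of HIPAA identifiers.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                  # Social Security numbers
    (re.compile(r"\b\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"), "[PHONE]"),  # phone numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),           # dates
    (re.compile(r"\bMRN[:# ]?\d+\b", re.IGNORECASE), "[MRN]"),        # record numbers
]

def scrub(text: str) -> str:
    """Replace obvious identifiers before pasting text into a ChatGPT query."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(scrub("Seen 3/14/2023, MRN 48219, callback 555-867-5309."))
# → Seen [DATE], [MRN], callback [PHONE].
```

Running every outbound query through a filter like this makes the “no identifiers” rule a default rather than something to remember each time.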
Embracing AI tools like ChatGPT in your practice can help you stay at the cutting edge of medical care, saving you time, enhancing patient communication, and supporting you in providing personalized care.
While the free version can serve as a good starting point (there are apps for both iOS and Android), upgrading to the paid version opens up a world of possibilities that can truly supercharge your practice.
I would love to hear your comments on this column or on future topics. Contact me at [email protected].
Arturo Loaiza-Bonilla, MD, MSEd, is the cofounder and chief medical officer at Massive Bio, a company connecting patients to clinical trials using artificial intelligence. His research and professional interests focus on precision medicine, clinical trial design, digital health, entrepreneurship, and patient advocacy. Dr. Loaiza-Bonilla is Assistant Professor of Medicine, Drexel University School of Medicine, Philadelphia, Pennsylvania, and serves as medical director of oncology research at Capital Health in New Jersey, where he maintains a connection to patient care by attending to patients 2 days a week. He has financial relationships with Verify, PSI CRO, Bayer, AstraZeneca, Cardinal Health, BrightInsight, The Lynx Group, Fresenius, Pfizer, Ipsen, Guardant, Amgen, Eisai, Natera, Merck, and Bristol Myers Squibb.
A version of this article appeared on Medscape.com.
Electronic Health Records — Recent Survey Results
I have been writing about electronic health records since the mid-1990s. While the basic concept has always been sound, I have always been (and continue to be) a critic of its implementation, which I have compared to the work of the Underpants Gnomes from the television show South Park.
You may recall that Phase One of the Gnomes’ grand scheme was to collect underpants, and Phase Three was to reap enormous profits. Unfortunately, they never quite figured out Phase Two.
EHR’s problems have run a similar course, ever since George W. Bush made computerized health records a national goal in his 2004 State of the Union address, a goal later pursued through the EHR Incentive Program (since renamed the Promoting Interoperability Program). “By computerizing health records,” the president said, “we can avoid dangerous medical mistakes, reduce costs, and improve care.” That was the ultimate goal — Phase Three, if you will — but 2 decades later, we are still struggling with Phase Two.
According to the results of a recent survey by this news organization, progress has been made, but issues with usability, reliability, and patient privacy remain.
More than half (56%) of physicians surveyed continue to worry about harmful effects from incorrect or misdirected information as a result of inputs from multiple sources, and the rapid turnover of the staff doing the inputting. Many doctors worry about the potential for incorrect medications and “rule out” diagnoses getting embedded in some patients’ records and undermining future care.
The lack of information sharing among different EHR systems has been the technology’s greatest unmet promise, according to the survey. A lack of interoperability was cited as the most common reason for switching EHR systems. Other reasons included difficulties in clinical documentation and extracting data for quality reporting, as well as the inability to merge inpatient and outpatient records.
A clear majority (72%) felt EHR systems are getting easier to use. The recent decrease in government mandates has freed vendors to work on improving ease of documentation and information retrieval. The incorporation of virtual assistants and other artificial intelligence–based features (as I discussed in two recent columns) has also contributed to improved overall usability. Some newer applications even allow users to build workarounds to compensate for inherent deficiencies in the system.
Physicians tended to be most praiseworthy of functions related to electronic prescribing and retrieval of individual patient data. They felt that much more improvement was needed in helpful prompt features, internal messaging, and communications from patients.
The survey found that 38% of physicians “always” or “often” copy and paste information in patient charts, with another 37% doing so “occasionally.” Noting some of the problems inherent in copy and paste, such as note bloat, internal inconsistencies, error propagation, and documentation in the wrong patient chart, the survey authors suggest that EHR developers could help by shifting away from timelines that appear as one long note. They could also add functionality to allow new information to be displayed as updates on a digital chart.
Improvement is also needed in the way the EHR affects patient interactions, according to the survey results. Physicians are still often forced to click to a different screen to find lab results, another for current medications, and still another for past notes, all while trying to communicate with the patient. Such issues are likely to decrease in the next few years as doctors gain the ability to give voice commands to AI-based system add-ons to obtain this information.
Security concerns seem to be decreasing. In this year’s survey, nearly half of all physicians voiced no EHR privacy problems or concerns, even though a recent review of medical literature concluded that security risks remain meaningful. Those who did have privacy concerns were mostly worried about hackers and other unauthorized access to patient information.
The survey found that around 40% of EHR systems are not using patient portals to post lab results, diagnoses and procedure notes, or prescriptions. However, other physicians complained that their systems were too prompt in posting results, so that patients often received them before the doctor did. This is certainly another area where improvement at both extremes is necessary.
Other areas in which physicians saw a need for improvement were in system reliability, user training, and ongoing customer service. And among the dwindling ranks of physicians with no EHR experience, the most common reasons given for refusing to invest in an EHR system were affordability and interference with the doctor-patient relationship.
Dr. Eastern practices dermatology and dermatologic surgery in Belleville, N.J. He is the author of numerous articles and textbook chapters, and is a longtime monthly columnist for Dermatology News. Write to him at [email protected].
Toward a better framework for postmarketing reproductive safety surveillance of medications
For the last 30 years, the Center for Women’s Mental Health at Massachusetts General Hospital (MGH) has had, as part of its mission, the conveying of accurate information about the reproductive safety of psychiatric medications. A spectrum of medicines has been developed across psychiatric indications over the last several decades, and many studies have attempted to delineate the reproductive safety of these agents.
With the development of new antidepressants and second-generation antipsychotics has come an appreciation of the utility of these agents across a wide range of psychiatric disease states and symptoms. More and more data demonstrate the efficacy of these medicines for mood and anxiety disorders; they are also used for a broad array of symptoms, such as insomnia, irritability, and symptoms of posttraumatic stress disorder (PTSD), even absent formal approval by the US Food and Drug Administration (FDA) for these specific indications. With the growing use of these medicines, including new antidepressants like selective serotonin reuptake inhibitors (SSRIs) and serotonin-norepinephrine reuptake inhibitors, as well as second-generation atypical antipsychotics, there has been a greater appreciation of the need to provide women with the best information about their reproductive safety as well.
When I began working in reproductive psychiatry, the FDA was using the pregnancy labeling categories introduced in 1979. The categories were simple but oversimplified, incompletely conveying information about reproductive safety. For instance, categories B and C under the old labeling system could be nebulous, containing sparse information (in the case of category B) or animal data and some conflicting human data (in the case of category C) that did not necessarily translate into relevant or easily interpretable safety information for patients and clinicians.
It was on that basis that the current Pregnancy and Lactation Labeling Rule (PLLR) was published as a final rule in 2014, shifting from categorical labeling to more descriptive labeling, including updated information on the package insert about available human reproductive safety data, animal data, and data on lactation.
Even following the publication of the PLLR, there has still been an acknowledgment in the field that our assessment tools for postmarketing reproductive safety surveillance are incomplete. A recent 2-day FDA workshop hosted by the Duke-Margolis Center for Health Policy on optimizing the use of postapproval pregnancy safety studies sought to discuss the many questions that still surround this issue. Based on presentations at this workshop, a framework emerged for the future of assessing the reproductive safety of medications, which included an effort to develop the most effective model using tools such as pregnancy registries and harnessing “big data,” whether through electronic health records or large administrative databases from public and private insurers. Together, these various sources of information can provide signals of potential concern, prompting the need for a more rigorous look at the reproductive safety of a medication, or provide reassurance if the data fail to indicate a signal of risk.
FDA’s new commitments under the latest reauthorization of the Prescription Drug User Fee Act (PDUFA VII) include pregnancy-specific postmarketing safety requirements as well as the creation of a framework for how data from pregnancy-specific postmarketing studies can be used. The agency is also conducting demonstration projects, including one for assessing the performance of pregnancy registries for the potential to detect safety signals for medications early in pregnancy. FDA is expanding its Sentinel Initiative to help accomplish these aims, and is implementing an Active Risk Identification and Analysis (ARIA) system to conduct active safety surveillance of medications used during pregnancy.
Pregnancy registries have now been available for decades, and some have been more successful than others across different classes of medicines, with the most rigorous registries including prospective follow-up of women across pregnancies and careful documentation of malformations (at best with original source data and with a blinded dysmorphologist). Still, with all of its rigor, even the best-intentioned efforts with respect to pregnancy registries have limitations. As I mentioned in my testimony during the public comment portion of the workshop, the sheer volume of pregnancy data from administrative databases we now have access to is attractive, but the quality of these data needs to be good enough to ascertain a signal of risk if they are to be used as a basis for reproductive safety determination.
The flip side of using data from large administrative databases is using carefully collected data from pregnancy registries. With a pregnancy registry, accrual of a substantial number of participants can take a considerable period of time, and initial risk estimates of outcomes typically have large confidence intervals, which can make it difficult to discern whether a drug is safe for women of reproductive age.
Another key issue is a lack of participation from manufacturers with respect to commitment to collection of high-quality reproductive safety data. History has shown that many medication manufacturers, unless required to have a dedicated registry as part of a postmarketing requirement or commitment, will invest sparse resources to track data on safety of fetal drug exposure. Participation is typically voluntary and varies from company to company unless, as noted previously, there is a postmarketing requirement or commitment tied to the approval of a medication. As a recent concrete example, the manufacturer of a new medication recently approved by the FDA for the treatment of postpartum depression (which will include presumably sexually active women well into the first postpartum year) has no plan to support the collection of reproductive safety data on this new medication, because it is not required to based on current FDA guidelines and the absence of a postmarketing requirement.
Looking ahead
While the PLLR was a huge step forward from the old pregnancy category system that could misinform women contemplating pregnancy, it also sets the stage for the next iteration of a system that allows us to generate information more quickly about the reproductive safety of medications. In psychiatry, as many as 10% of women use SSRIs during pregnancy. With drugs like atypical antipsychotics being used across disease states — in schizophrenia, bipolar disorder, depression, anxiety, insomnia, and PTSD — and with new classes of medicine becoming available, like ketamine or neuroactive steroids, we need a system by which we can more quickly ascertain reproductive safety information. This information informs treatment decisions during a critical life event: deciding to try to become pregnant, or during an actual pregnancy.
In my mind, it is reassuring when a registry has even as few as 50-60 cases of fetal exposure without an increase in the risk for malformation, because it can mean we are not seeing a repeat of the past with medications like thalidomide and sodium valproate. However, patients and clinicians are starved for better data. Risk assessment is also different from clinician to clinician and patient to patient. We want to empower patients to make decisions that work for them based on more rapidly accumulating information and help inform their decisions.
To come out on the “other side” of the PLLR, , which can be confusing when study results frequently conflict. I believe we have an obligation today to do this better, because the areas of reproductive toxicology and pharmacovigilance are growing incredibly quickly, and clinicians and patients are seeing these volumes of data being published without the ability to integrate that information in a systematic way.
Dr. Cohen is the director of the Ammon-Pinizzotto Center for Women’s Mental Health at Massachusetts General Hospital (MGH) in Boston, which provides information resources and conducts clinical care and research in reproductive mental health. He has been a consultant to manufacturers of psychiatric medications. Full disclosure information for Dr. Cohen is available at womensmentalhealth.org. Email Dr. Cohen at [email protected].
For the last 30 years, the Center for Women’s Mental Health at Massachusetts General Hospital (MGH) has had, as part of its mission, the conveying of accurate information about the reproductive safety of psychiatric medications. A spectrum of medicines has been developed across psychiatric indications over the last several decades, and many studies over those decades have attempted to delineate the reproductive safety of these agents.
With the development of new antidepressants and second-generation antipsychotics has come an appreciation of the utility of these agents across a wide range of psychiatric disease states and psychiatric symptoms. More and more data demonstrate the efficacy of these medicines for mood and anxiety disorders; these agents are also used for a broad array of symptoms from insomnia, irritability, and symptoms of posttraumatic stress disorder (PTSD) just as examples — even absent formal approval by the US Food and Drug Administration (FDA) for these specific indications. With the growing use of medicines, including new antidepressants like selective serotonin reuptake inhibitors (SSRIs) and serotonin-norepinephrine reuptake inhibitors, and second-generation atypical antipsychotics, there has been a greater interest and appreciation of the need to provide women with the best information about reproductive safety of these medicines as well.
When I began working in reproductive psychiatry, the FDA was using the pregnancy labeling categories introduced in 1979. The categories were simple, but also oversimplified in terms of incompletely conveying information about reproductive safety. For instance, category labels of B and C under the old labeling system could be nebulous, containing sparse information (in the case of category B) or animal data and some conflicting human data (in the case of category C) that may not have translated into relevant or easily interpretable safety information for patients and clinicians.
It was on that basis the current Pregnancy and Lactation Labeling (PLLR) Final Rule was published in 2014, which was a shift from categorical labeling to more descriptive labeling, including updated actual information on the package insert about available reproductive safety data, animal data, and data on lactation.
Even following the publication of the PLLR, there has still been an acknowledgment in the field that our assessment tools for postmarketing reproductive safety surveillance are incomplete. A recent 2-day FDA workshop hosted by the Duke-Margolis Center for Health Policy on optimizing the use of postapproval pregnancy safety studies sought to discuss the many questions that still surround this issue. Based on presentations at this workshop, a framework emerged for the future of assessing the reproductive safety of medications, which included an effort to develop the most effective model using tools such as pregnancy registries and harnessing “big data,” whether through electronic health records or large administrative databases from public and private insurers. Together, these various sources of information can provide signals of potential concern, prompting a more rigorous look at the reproductive safety of a medication, or provide reassurance if the data fail to indicate a signal of risk.
FDA’s new commitments under the latest reauthorization of the Prescription Drug User Fee Act (PDUFA VII) include pregnancy-specific postmarketing safety requirements as well as the creation of a framework for how data from pregnancy-specific postmarketing studies can be used. The agency is also conducting demonstration projects, including one for assessing the performance of pregnancy registries for the potential to detect safety signals for medications early in pregnancy. FDA is expanding its Sentinel Initiative to help accomplish these aims, and is implementing an Active Risk Identification and Analysis (ARIA) system to conduct active safety surveillance of medications used during pregnancy.
Pregnancy registries have now been available for decades, and some have been more successful than others across different classes of medicines. The most rigorous registries include prospective follow-up of women across pregnancy and careful documentation of malformations (ideally with original source data and a blinded dysmorphologist). Still, for all their rigor, even the best-intentioned pregnancy registry efforts have limitations. As I mentioned in my testimony during the public comment portion of the workshop, the sheer volume of pregnancy data now accessible from administrative databases is attractive, but the quality of those data must be good enough to ascertain a signal of risk if they are to serve as a basis for reproductive safety determinations.
The flip side of using data from large administrative databases is using carefully collected data from pregnancy registries. With a pregnancy registry, accrual of a substantial number of participants can take a considerable period of time, and initial risk estimates of outcomes typically have large confidence intervals, which can make it difficult to discern whether a drug is safe for women of reproductive age.
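The statistical point about small registries can be made concrete. Below is a rough sketch, using purely illustrative numbers (not drawn from any actual registry), of a Wilson 95% confidence interval for a malformation rate, showing how wide the interval remains at typical registry-scale enrollment:

```python
import math

def wilson_ci(events, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion
    (here, the rate of malformations among registry-followed exposures)."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical registry: 2 malformations among 60 prospectively followed exposures
lo, hi = wilson_ci(2, 60)
print(f"n=60:  {lo:.3f} to {hi:.3f}")   # roughly 0.9% to 11% -- too wide to rule much out

# The same observed rate with tenfold enrollment
lo, hi = wilson_ci(20, 600)
print(f"n=600: {lo:.3f} to {hi:.3f}")   # roughly 2% to 5% -- far more informative
```

At 60 exposures the interval spans an order of magnitude, which is exactly why early registry estimates are hard to act on; only as accrual grows does the estimate become clinically interpretable.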
Another key issue is a lack of commitment from manufacturers to the collection of high-quality reproductive safety data. History has shown that many medication manufacturers, unless required to maintain a dedicated registry as part of a postmarketing requirement or commitment, will invest sparse resources in tracking the safety of fetal drug exposure. Participation is typically voluntary and varies from company to company unless, as noted previously, a postmarketing requirement or commitment is tied to the approval of a medication. As a recent, concrete example, the manufacturer of a medication newly approved by the FDA for the treatment of postpartum depression (a population that presumably includes sexually active women well into the first postpartum year) has no plan to support the collection of reproductive safety data on the drug, because it is not required to do so under current FDA guidelines and no postmarketing requirement is attached.
Looking ahead
While the PLLR was a huge step forward from the old pregnancy category system, which could misinform women contemplating pregnancy, it also sets the stage for the next iteration: a system that allows us to generate information about the reproductive safety of medications more quickly. In psychiatry, as many as 10% of women use SSRIs during pregnancy. With drugs like atypical antipsychotics used across disease states (schizophrenia, bipolar disorder, depression, anxiety, insomnia, and PTSD), and with new classes of medicine becoming available, such as ketamine and neurosteroids, we need a system by which we can more quickly ascertain reproductive safety information. That information informs treatment decisions at critical moments: when deciding to try to become pregnant, or during pregnancy itself.
In my mind, it is reassuring when a registry has even as few as 50-60 cases of fetal exposure without an increase in the risk for malformation, because it can mean we are not seeing a repeat of the past with medications like thalidomide and sodium valproate. However, patients and clinicians are starved for better data. Risk assessment also differs from clinician to clinician and patient to patient. We want to empower patients to make decisions that work for them based on more rapidly accumulating information.
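The reassurance offered by 50-60 uneventful exposures can be quantified with the epidemiologist's "rule of three": when zero events are observed among n exposures, the upper 95% confidence bound on the true event risk is approximately 3/n. A minimal sketch, applied to illustrative exposure counts:

```python
def rule_of_three_upper_bound(n_exposures):
    """Approximate upper 95% confidence bound on event risk
    when zero events are observed among n_exposures (rule of three)."""
    return 3 / n_exposures

for n in (50, 60, 200, 1000):
    bound = rule_of_three_upper_bound(n)
    print(f"{n:5d} exposures, 0 malformations -> true risk likely below {bound:.1%}")
```

With 60 clean exposures, the data can exclude only risks above roughly 5%: enough to make a thalidomide-scale teratogen unlikely, but far from proof of safety, which is why the column calls for faster-accumulating data.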
To come out on the “other side” of the PLLR, the field needs a coherent way to make sense of rapidly accumulating reproductive safety data, which can be confusing when study results frequently conflict. I believe we have an obligation today to do this better, because the areas of reproductive toxicology and pharmacovigilance are growing incredibly quickly, and clinicians and patients are seeing these volumes of data being published without the ability to integrate that information in a systematic way.
Dr. Cohen is the director of the Ammon-Pinizzotto Center for Women’s Mental Health at Massachusetts General Hospital (MGH) in Boston, which provides information resources and conducts clinical care and research in reproductive mental health. He has been a consultant to manufacturers of psychiatric medications. Full disclosure information for Dr. Cohen is available at womensmentalhealth.org. Email Dr. Cohen at [email protected].
How to prescribe Zepbound
In November 2023, the US Food and Drug Administration (FDA) approved tirzepatide (Zepbound) for the treatment of obesity in adults, and this December marks its debut for on-label treatment of obesity.
In May 2022, the FDA had already approved Mounjaro, the same molecule, for type 2 diabetes, and many physicians, including myself, have since prescribed it off-label for obesity. As an endocrinologist treating both obesity and diabetes, I offer the following practical perspective on prescribing it.
The Expertise
Because GLP-1 receptor agonists have been around since 2005, we have had nearly two decades of clinical experience with this class of medications. Table 2 provides more nuanced information on tirzepatide (as Zepbound, for obesity) based on our experiences with dulaglutide, liraglutide, semaglutide, and tirzepatide (as Mounjaro).
The Reality
In today’s increasingly complex healthcare system, the reality of providing high-quality obesity care is challenging. When discussing tirzepatide with patients, I use a 4 Cs schematic — comorbidities, cautions, costs, choices — to cover the most frequently asked questions.
Comorbidities
In trials, tirzepatide reduced A1c by about 2%. In one diabetes trial, tirzepatide reduced liver fat content significantly more than the comparator (insulin), and trials of tirzepatide in nonalcoholic steatohepatitis are ongoing. A prespecified meta-analysis of tirzepatide and cardiovascular disease estimated a 20% reduction in the risk for cardiovascular death, myocardial infarction, stroke, and hospitalization for unstable angina. Tirzepatide, like other GLP-1 agonists, may also prove beneficial in alcohol use disorder. Prescribing tirzepatide to patients who have, or are at risk of developing, such comorbidities is an ideal way to target multiple metabolic diseases with one agent.
Cautions
The first principle of medicine is “do no harm.” Tirzepatide may be a poor option for individuals with a history of pancreatitis, gastroparesis, or severe gastroesophageal reflux disease. Because tirzepatide may interfere with the efficacy of estrogen-containing contraceptives during its uptitration phase, women should speak with their doctors about appropriate birth control options (eg, progestin-only, barrier methods). In clinical trials of tirzepatide, male participants were also advised to use reliable contraception. If patients are family-planning, tirzepatide should be discontinued 2 months (for women) and 4 months (for men) before conception, because its effects on fertility or pregnancy are currently unknown.
Costs
At a retail price of $1279 per month, Zepbound is only slightly more affordable than its main competitor, Wegovy (semaglutide 2.4 mg). Complex pharmacy negotiations may reduce this cost, but even with rebates, coupons, and commercial insurance, these costs still place tirzepatide out of reach for many patients. For patients who cannot access tirzepatide, clinicians should discuss more cost-feasible, evidence-based alternatives: for example, phentermine, phentermine-topiramate, naltrexone-bupropion, metformin, bupropion, or topiramate.
Choices
Patient preference drives much of today’s clinical decision-making. Some patients may be switching from semaglutide to tirzepatide, whether by choice or on the basis of physician recommendation. Although no head-to-head obesity trial exists, data from SURPASS-2 and SUSTAIN-FORTE can inform therapeutic equivalence:
- Semaglutide 1.0 mg to tirzepatide 2.5 mg will be a step-down; 5 mg will be a step-up
- Semaglutide 2.0 or 2.4 mg to tirzepatide 5 mg is probably equivalent
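The switching guidance above can be summarized as a simple lookup. The sketch below merely encodes the text's SURPASS-2/SUSTAIN-FORTE-derived equivalences; the names and structure are my own, and it is illustrative only, not a prescribing algorithm:

```python
# Illustrative only -- encodes the approximate dose equivalences stated above,
# not clinical guidance. All names are hypothetical.
SEMA_TO_TIRZE = {
    1.0: {"step_down": 2.5, "step_up": 5.0},  # semaglutide 1.0 mg
    2.0: {"approx_equivalent": 5.0},          # semaglutide 2.0 mg
    2.4: {"approx_equivalent": 5.0},          # semaglutide 2.4 mg (Wegovy top dose)
}

def tirzepatide_options(semaglutide_mg):
    """Return the tirzepatide starting doses suggested by the equivalences above."""
    try:
        return SEMA_TO_TIRZE[semaglutide_mg]
    except KeyError:
        raise ValueError(f"No equivalence listed for semaglutide {semaglutide_mg} mg")

print(tirzepatide_options(2.4))  # {'approx_equivalent': 5.0}
```

In practice the choice between the step-down and step-up option would rest on the shared decision-making factors discussed next.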
The decision to switch therapeutics may depend on weight loss goals, side effect tolerability, or insurance coverage. As with all medications, the use of tirzepatide should progress with shared decision-making, thorough discussions of risks vs benefits, and individualized regimens tailored to each patient’s needs.
The newly approved Zepbound is a valuable addition to our toolbox of obesity treatments. Patients and providers alike are excited for its potential as a highly effective antiobesity medication that can cause a degree of weight loss necessary to reverse comorbidities. The medical management of obesity with agents like tirzepatide holds great promise in addressing today’s obesity epidemic.
Dr. Tchang is Assistant Professor, Clinical Medicine, Division of Endocrinology, Diabetes, and Metabolism, Weill Cornell Medicine; Physician, Department of Medicine, Iris Cantor Women’s Health Center, Comprehensive Weight Control Center, New York, NY. She disclosed ties to Gelesis and Novo Nordisk.
A version of this article appeared on Medscape.com.
Technology for primary care — terrific, terrifying, or both?
We have all been using technology in our primary care practices for a long time, but newer formats have been emerging so fast that our minds, much less our staff’s minds, may be spinning.
Our old friend the telephone, a time-soaking nemesis for scheduling, checking coverage, question calls, prescribing, quick consults, and follow-up, is being replaced by EHR portals and SMS for messaging (e.g., DoctorConnect, SimplePractice), drop-in televisits and patient education links on our websites (e.g., Schmitt Pediatric Care, Remedy Connect), and chatbots for scheduling (e.g., CHEC-UP). While time is saved, what may be lost is hearing the subtext of anxiety or misperception in parents’ voices that would change our advice, along with the empathetic human connection in conversations with our patients. A hybrid approach may be better.
The paper appointment book has been replaced by scheduling systems that sometimes lack flexibility for double booking, sibling visits, variable-length visits, and the extremely valuable multi-professional visit. Allowing patients to book their own visits may place complex problems in inappropriate slots, so permitting only online requests for visits is safer. On the other hand, many of us can now squeeze in “same day” televisits (e.g., Blueberry Pediatrics), sometimes from outside our practice (e.g., Zocdoc), to increase payments and even entice new patients to enroll.
Amazing technological advances are being made in specialty care, such as genetic modification (CRISPR), immunotherapies (mRNA vaccines and AI-assisted drug design), robot-assisted surgery, and 3-D printing of body parts and prosthetics. Technologies as treatment, such as transcranial magnetic stimulation and vagal stimulation, are finding value in psychiatry.
But besides our being aware of and able to order such specialty technologies, innovations are now extending our senses in primary care: amplified or visual stethoscopes, bedside ultrasound (e.g., Butterfly), remote visualization (oto-, endo-) scopes, photographic vision screens (e.g., iScreen), and image recognition for skin lesions (VisualDx) and genetic syndrome facies. We need to be sure that technologies are tested and calibrated for children and for different racial groups and genders to provide safe and equitable care; early adoption may not always be the best approach. Costs of technology may limit access to these advanced care aids, especially, as usual, in practices serving low-income and rural communities.
Patients, especially younger parents and youth, now expect to participate in and can directly benefit from technology as part of their health care. Validated parent- or self-report screens (e.g., EHRs, Phreesia) can detect important issues early for more effective intervention. Such questionnaires typically provide a pass/fail result or score, but other delivery systems (e.g., CHADIS) include interpretation, assist patients/parents in setting visit priorities and health goals, and even chain the results of one questionnaire to secondary screens to home in on problems, sometimes obviating a time-consuming second visit. Patient-completed comprehensive questionnaires (e.g., Well Visit Planner, CHADIS) allow us time to use our skills to focus on concerns, education, and management rather than asking myriad routine questions. Some (e.g., CHADIS) even create visit documentation, reducing our “pajama time” write-ups (and burnout); automate repeated online measures to track progress; and use questionnaire results to trigger related patient-specific education and resources rather than the often-ignored generic EHR handouts.
Digital therapeutics such as apps for anxiety (e.g. Calm), depression (e.g. SparkRx, Cass), weight control (e.g. Noom, Lose it), fitness, or sleep tracking (e.g. Whoop) help educate and, in some cases, provide real-time feedback to personalize discovery of contributing factors in order to maintain motivation for positive health behavior change. Some video games improve ADHD symptoms (e.g. EndeavorRX). Virtual reality scenarios have been shown to desensitize those with PTSD and social anxiety or teach social skills to children with autism.
Systems that trigger resource listings (including apps) from screen results can help, but now with over 10,000 apps for mental health, knowing what to recommend for what conditions is a challenge for which ratings (e.g. MINDapps.org) can help. With few product reps visiting to tell us what’s new, we need to read critically about innovations, search the web, subscribe to the AAP SOAPM LISTSERV, visit exhibitors at professional meetings, and talk with peers.
All the digital data collected from health care technology, if assembled with privacy protections and analyzed with advanced statistical methods, could, with or without the inclusion of genomic data, enable more accurate diagnostic and treatment decision support. While AI can search widely for patterns, it needs to be “trained” on appropriate data to reach correct conclusions. We are all aware that the history determines 85% of both diagnosis and treatment decisions, particularly in primary care, where x-rays or lab tests are not often needed.
But history in EHR notes is often idiosyncratic, entered hours after the visit by the clinician, and does not include the information needed to define diagnostic or guideline criteria, even if the clinician knows and considered those criteria. EHR templates are presented blank and are onerous and time consuming for clinicians. In addition, individual patient barriers to care, preferences, and environmental or subjective concerns are infrequently documented even though they may make the biggest difference to adherence and/or outcomes.
Notes made by voice-to-text digital AI transcription of the encounter (e.g., Nuance DAX) are even less likely to include diagnostic criteria, as it would be unnatural to speak these aloud. To use EHR history data to train AI and to test the efficacy of care under variations of guidelines, guideline-related data are needed from online patient questionnaire entries that are transformed to fill in templates, along with some structured choices for clinician entries, together forming the visit note (e.g., CHADIS). New apps that facilitate clinician documentation of guidelines (e.g., AvoMD) could streamline visits as well as help document guideline criteria. The resulting combination of guideline-relevant patient histories and objective data to test and iteratively refine guidelines would enable a process known as a “Learning Health System.”
Technology to collect this kind of data can help the aspirational American Academy of Pediatrics CHILD Registry approach this goal. Population-level data can provide surveillance for illness, toxins, effects of climate change, social drivers of health, and even effects of technologies themselves, such as social media and remote learning, so that we can attempt to make the best choices for the future.
Clinicians, staff, and patients will need to develop trust in technology as it infiltrates all aspects of health care. Professionals need both evidence and experience to trust a technology, which takes time and effort. Disinformation in the media may reduce trust or evoke unwarranted trust, as we have all seen regarding vaccines. Clear and coherent public health messaging can help but is no longer a panacea for developing trust in health care. Our nonjudgmental listening and informed opinions are needed more than ever.
The biggest issues for new technology are likely to be the need for workflow adjustments, changing our habit patterns, training, and cost/benefit analyses. With today’s high staff churn, confusion and even chaos can ensue when adopting new technology.
Staff need to be part of the selection process, if at all possible, and discuss how roles and flow will need to change. Having one staff member be a champion and expert for new tech can move adoption to a shared process rather than imposing “one more thing.” It is crucial to discuss the benefits for patients and staff even if the change is required. Sometimes cost savings can include a bonus for staff or free group lunches. Providing a certificate of achievement or title promotion for mastering new tech may be appropriate. Giving some time off from other tasks to learn new workflows can reduce resistance rather than just adding it on to a regular workload. Office “huddles” going forward can include examples of benefits staff have observed or heard about from the adoption. There are quality improvement processes that engage the team — some that earn MOC-4 or CEU credits — that apply to making workflow changes and measuring them iteratively.
If technology takes over important aspects of the work of medical professionals, even if it is faster and/or more accurate, it may degrade clinical observational, interactional, and decision-making skills through lack of use. It may also remove the sense of self-efficacy that motivates professionals to endure onerous training and desire to enter the field. Using technology may reduce empathetic interactions that are basic to humanistic motivation, work satisfaction, and even community respect. Moral injury is already rampant in medicine from restrictions on freedom to do what we see as important for our patients. Technology has great potential and already is enhancing our ability to provide the best care for patients but the risks need to be watched for and ameliorated.
When technology automates comprehensive visit documentation that highlights priority and risk areas from patient input and individualizes decision support, it can facilitate the personalized care that we and our patients want to experience. We must not be so awed, intrigued, or wary of new technology as to miss its benefits, nor give up our good clinical judgment about the technology or about our patients.
Dr. Howard is assistant professor of pediatrics at The Johns Hopkins University School of Medicine, Baltimore, and creator of CHADIS. She had no other relevant disclosures. Dr. Howard’s contribution to this publication was as a paid expert to MDedge News. E-mail her at [email protected].
Why Are Prion Diseases on the Rise?
This transcript has been edited for clarity.
In 1986, in Britain, cattle started dying.
The condition, quickly nicknamed “mad cow disease,” was clearly infectious, but the particular pathogen was difficult to identify. By 1993, 120,000 cattle in Britain were identified as being infected. As yet, no human cases had occurred and the UK government insisted that cattle were a dead-end host for the pathogen. By the mid-1990s, however, multiple human cases, attributable to ingestion of meat and organs from infected cattle, were discovered. In humans, variant Creutzfeldt-Jakob disease (CJD) was a media sensation — a nearly uniformly fatal, untreatable condition with a rapid onset of dementia, mobility issues characterized by jerky movements, and autopsy reports finding that the brain itself had turned into a spongy mess.
The United States banned UK beef imports in 1996 and only lifted the ban in 2020.
The disease was made all the more mysterious because the pathogen involved was not a bacterium, parasite, or virus, but a protein — or a proteinaceous infectious particle, shortened to “prion.”
Prions are misfolded proteins that aggregate in cells — in this case, in nerve cells. But what makes prions different from other misfolded proteins is that the misfolded protein catalyzes the conversion of its normally folded counterpart into the misfolded configuration. This creates a chain reaction, leading to rapid accumulation of misfolded protein and cell death.
And, like a time bomb, we all have prion protein inside us. In its normally folded state, the function of prion protein remains unclear — knockout mice do okay without it — but it is also highly conserved across mammalian species, so it probably does something worthwhile, perhaps protecting nerve fibers.
Far more common than humans contracting mad cow disease is the condition known as sporadic CJD, responsible for 85% of all cases of prion-induced brain disease. The cause of sporadic CJD is unknown.
But one thing is known: Cases are increasing.
I don’t want you to freak out; we are not in the midst of a CJD epidemic. But it’s been a while since I’ve seen people discussing the condition — which remains as horrible as it was in the 1990s — and a new research letter appearing in JAMA Neurology brought it back to the top of my mind.
Researchers, led by Matthew Crane at Hopkins, used the CDC’s WONDER cause-of-death database, which pulls diagnoses from death certificates. Normally, I’m not a fan of using death certificates for cause-of-death analyses, but in this case I’ll give it a pass. Assuming that the diagnosis of CJD is made, it would be really unlikely for it not to appear on a death certificate.
The main findings are seen here.
Note that we can’t tell whether these are sporadic CJD cases or variant CJD cases or even familial CJD cases; however, unless there has been a dramatic change in epidemiology, the vast majority of these will be sporadic.
The question is, why are there more cases?
Whenever this type of question comes up with any disease, there are basically three possibilities:
First, there may be an increase in the susceptible, or at-risk, population. In this case, we know that older people are at higher risk of developing sporadic CJD, and over time, the population has aged. To be fair, the authors adjusted for this and still saw an increase, though it was attenuated.
Second, we might be better at diagnosing the condition. A lot has happened since the mid-1990s, when the diagnosis was based more or less on symptoms. The advent of more sophisticated MRI protocols as well as a new diagnostic test called “real-time quaking-induced conversion testing” may mean we are just better at detecting people with this disease.
Third (and most concerning), a new exposure has occurred. What that exposure might be, where it might come from, is anyone’s guess. It’s hard to do broad-scale epidemiology on very rare diseases.
But given these findings, it seems that a bit more surveillance for this rare but devastating condition is well merited.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape. He tweets @fperrywilson and his new book, How Medicine Works and When It Doesn’t, is available now.
F. Perry Wilson, MD, MSCE, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Can AI enhance mental health treatment?
Three questions for clinicians
Artificial intelligence (AI) is already impacting the mental health care space, with several new tools available to both clinicians and patients. While this technology could be a game-changer amid a mental health crisis and clinician shortage, there are important ethical and efficacy concerns clinicians should be aware of.
Current use cases illustrate both the potential and risks of AI. On one hand, AI has the potential to improve patient care with tools that can support diagnoses and inform treatment decisions at scale. The UK’s National Health Service is using an AI-powered diagnostic tool to help clinicians diagnose mental health disorders and determine the severity of a patient’s needs. Other tools leverage AI to analyze a patient’s voice for signs of depression or anxiety.
On the other hand, there are serious potential risks involving privacy, bias, and misinformation. One chatbot tool designed to counsel patients through disordered eating was shut down after giving problematic weight-loss advice.
The number of AI tools in the healthcare space is expected to increase fivefold by 2035. Keeping up with these advances is just as important for clinicians as keeping up with the latest medication and treatment options. That means being aware of both the limitations and the potential of AI. Here are three questions clinicians can ask as they explore ways to integrate these tools into their practice while navigating the risks.
• How can AI augment, not replace, the work of my staff?
For example, documentation and the use of electronic health records have consistently been linked to clinician burnout. Using AI to cut down on documentation would leave clinicians with more time and energy to focus on patient care.
One study from the National Library of Medicine found that physicians who did not have enough time to complete documentation were nearly three times more likely to report burnout. In some cases, clinic schedules were deliberately shortened to allow time for documentation.
New tools are emerging that use audio recording, transcription services, and large language models to generate clinical summaries and other documentation support. Amazon and 3M have partnered to solve documentation challenges using AI. This is an area I’ll definitely be keeping an eye on as it develops.
• Do I have patient consent to use this tool?
Since most AI tools remain relatively new, there is a gap in the legal and regulatory framework needed to ensure patient privacy and data protection. Clinicians should draw on existing guardrails and best practices to protect patient privacy and prioritize informed consent. The bottom line: Patients need to know how their data will be used and agree to it.
In the example above regarding documentation, a clinician should obtain patient consent before using technology that records or transcribes sessions. This extends to disclosing the use of AI chat tools and other touch points that occur between sessions. One mental health nonprofit has come under fire for using ChatGPT to provide mental health counseling to thousands of patients who weren’t aware the responses were generated by AI.
Beyond disclosing the use of these tools, clinicians should sufficiently explain how they work to ensure patients understand what they’re consenting to. Some technology companies offer guidance on how informed consent applies to their products and even offer template consent forms to support clinicians. Ultimately, accountability for maintaining patient privacy rests with the clinician, not the company behind the AI tool.
• Where is there a risk of bias?
There has been much discussion around the issue of bias within large language models in particular, since these programs will inherit any bias from the data points or text used to train them. However, there is often little to no visibility into how these models are trained, the algorithms they rely on, and how efficacy is measured.
This is especially concerning within the mental health care space, where bias can contribute to lower-quality care based on a patient’s race, gender, or other characteristics. One systematic review published in JAMA Network Open found that most of the AI models studied for psychiatric diagnosis had a high overall risk of bias, which can produce misleading or incorrect outputs — a dangerous prospect in healthcare.
It’s important to keep the risk of bias top-of-mind when exploring AI tools and consider whether a tool would pose any direct harm to patients. Clinicians should have active oversight with any use of AI and, ultimately, consider an AI tool’s outputs alongside their own insights, expertise, and instincts.
Clinicians have the power to shape AI’s impact
While there is plenty to be excited about as these new tools develop, clinicians should explore AI with an eye toward the risks as well as the rewards. Practitioners have a significant opportunity to help shape how this technology develops by making informed decisions about which products to invest in and holding tech companies accountable. By educating patients, prioritizing informed consent, and seeking ways to augment their work that ultimately improve quality and scale of care, clinicians can help ensure positive outcomes while minimizing unintended consequences.
Dr. Patel-Dunn is a psychiatrist and chief medical officer at Lifestance Health, Scottsdale, Ariz.
Clinician responsibilities during times of geopolitical conflict
In the realm of clinical psychology and psychiatry, our primary duty and commitment is (and should be) to the well-being of our patients. Yet, as we find ourselves in an era marked by escalating geopolitical conflict, such as the Israel-Hamas war, probably more aptly titled the Israeli-Hamas-Hezbollah-Houthi war (a clarification that elucidates a later point), clinicians are increasingly confronted with ethical dilemmas that extend far beyond what is outlined in our code of ethics.
These challenges are not only impacting us on a personal level but are also spilling over into our professional lives, creating a divisive and non-collegial environment within the healthcare community. We commit to “do no harm” when delivering care and yet we are doing harm to one another as colleagues.
We are no strangers to the complexities of human behavior and the intricate tapestry of emotions that are involved with our professional work. However, the current geopolitical landscape has added an extra layer of difficulty to our already taxing professional lives. We are, after all, human first with unconscious drives that govern how we negotiate cognitive dissonance and our need for the illusion of absolute justice as Yuval Noah Harari explains in a recent podcast.
Humans are notoriously bad at holding in mind the multiplicity of experience and the various, often competing, narratives that impede the capacity for nuanced thinking. We would like to believe we are better and more capable than the average person in doing so, but divisiveness in our profession has become disturbingly pronounced, making it essential, now more than ever, for us to carve out reflective space.
The personal and professional divide
Geopolitical conflicts like the current war have a unique capacity to ignite strong emotions and deeply held convictions. It’s not hard to quickly become embroiled in passionate and engaged debate.
While discussion and discourse are healthy, these are bleeding into professional spheres, creating rifts within our clinical communities and contributing to a culture where not everyone feels safe. Look at any professional listserv in medicine or psychology and you will find the evidence. It should be an immediate call to action that we need to be fostering a different type of environment.
The impact of divisiveness is profound, hindering opportunities for collaboration, mentorship, and the free exchange of ideas among clinicians. It may lead to misunderstandings, mistrust, and an erosion of the support systems we rely on, ultimately diverting energy away from the pursuit of providing quality patient care.
Balancing obligations and limits
Because of the inherent power differential that accompanies being in a provider role (physician and psychologist alike), we have a social and moral responsibility to be mindful of what we share – for the sake of humanity. There is an implicit assumption that a provider’s guidance should be adhered to and respected. Put simply, words carry tremendous weight, and people in the general public ascribe significant meaning to messages put out by professionals.
When providers stray from their lanes of professional expertise to offer the general public opinions or recommendations on nonmedical topics, problematic precedents can be set. We may be doing people a disservice.
Unfortunately, I have heard several anecdotes about clinicians who spend their patients’ session time pushing their own ideological agendas. The patient-provider relationship is founded on principles of trust, empathy, and collaboration, with the primary goal of improving overall well-being and addressing a specific presenting problem. Of course, issues emerge that need to be addressed outside the initial scope of treatment; that is an inherent part of the process. A grave concern arises, however, when clinicians initiate dialogue that is not meaningful to a patient, disclose and discuss their personal ideologies, or pressure patients to explain their beliefs in an attempt to change the patients’ minds.
Clinicians pushing their own agenda during patient sessions is antithetical to the objectives of psychotherapy and compromises the therapeutic alliance by diverting the focus of care in a way that serves the clinician rather than the client. It is quite the opposite of the patient-centered care that we strive for in training and practice.
Even within one’s theoretical professional scope of competence, I have seen the impact of emotions running high during this conflict, and have witnessed trained professionals making light of, or even mocking, hostages and their behavior upon release. These are care providers who could elucidate the complexities of captor-captive dynamics and the impact of trauma for the general public, yet they are contributing to dangerous perceptions and divisiveness.
I have also seen providers justify sexual violence, diminishing survivor and witness testimony due to ideological differences and strong personal beliefs. This is harmful to those impacted and does a disservice to our profession at large. In a helping profession we should strive to support and advocate for anyone who has been maltreated or experienced any form of victimization, violence, or abuse. This should be a professional standard.
As clinicians, we have an ethical obligation to uphold the well-being, autonomy, and dignity of our patients — and humanity. It is crucial to recognize the limits of our expertise and the ethical concerns that can arise in light of geopolitical conflict. How can we balance our duty to provide psychological support while also being cautious about delving into the realms of political analysis, foreign policy, or international relations?
The pitfalls of well-intentioned speaking out
In the age of social media and instant communication, a critical aspect to consider is the role of speaking out. The point I made above, in naming all partaking in the current conflict, speaks to this issue.
As providers and programs, we must be mindful of the inadvertent harm that can arise from making brief, underdeveloped, uninformed, or emotionally charged statements. Expressing opinions without a solid understanding of the historical, cultural, and political nuances of a conflict can contribute to misinformation and further polarization.
Anecdotally, there appears to be some significant degree of bias emerging within professional fields (e.g., psychology, medicine) and an innate calling for providers to “weigh in” as the war continues. Obviously, physicians and psychologists are trained to provide care and to be humanistic and empathic, but the majority do not have expertise in geopolitics or a nuanced awareness of the complexities of the conflict in the Middle East.
While hearts may be in the right place, issuing statements on complicated humanitarian/political situations can inadvertently have unintended and harmful consequences (in terms of antisemitism and islamophobia, increased incidence of hate crimes, and colleagues not feeling safe within professional societies or member organizations).
Unsophisticated, overly simplistic, and reductionistic statements that do not adequately convey nuance will not reflect the range of experience reflected by providers in the field (or the patients we treat). It is essential for clinicians and institutions putting out public statements to engage in deep reflection and utilize discernment. We must recognize that our words carry weight, given our position of influence as treatment providers. To minimize harm, we should seek to provide information that is fair, vetted, and balanced, and encourage open, respectful dialogue rather than asserting definitive positions.
Ultimately, as providers we must strive to seek unity and inclusivity amidst the current challenges. It is important for us to embody a spirit of collaboration during a time demarcated by deep fragmentation.
By acknowledging our limitations, promoting informed discussion, and avoiding the pitfalls of uninformed advocacy, we can contribute to a more compassionate and understanding world, even in the face of the most divisive geopolitical conflicts. We have an obligation to uphold when it comes to ourselves as professionals, and we need to foster healthy, respectful dialogue while maintaining an awareness of our blind spots.
Dr. Feldman is a licensed clinical psychologist in private practice in Miami. She is an adjunct professor in the College of Psychology at Nova Southeastern University, Fort Lauderdale, Fla., where she teaches clinical psychology doctoral students. She is an affiliate of Baptist West Kendall Hospital/FIU Family Medicine Residency Program and serves as president on the board of directors of The Southeast Florida Association for Psychoanalytic Psychology. The opinions expressed by Dr. Feldman are her own and do not represent the institutions with which she is affiliated. She has no disclosures.
In the realm of clinical psychology and psychiatry, our primary duty and commitment is (and should be) to the well-being of our patients. Yet, as we find ourselves in an era marked by escalating geopolitical conflict, such as the Israel-Hamas war, perhaps more aptly titled the Israel-Hamas-Hezbollah-Houthi war (a clarification that elucidates a later point), clinicians are increasingly confronted with ethical dilemmas that extend far beyond what is outlined in our code of ethics.
These challenges are not only impacting us on a personal level but are also spilling over into our professional lives, creating a divisive and non-collegial environment within the healthcare community. We commit to “do no harm” when delivering care and yet we are doing harm to one another as colleagues.
We are no strangers to the complexities of human behavior and the intricate tapestry of emotions involved in our professional work. However, the current geopolitical landscape has added an extra layer of difficulty to our already taxing professional lives. We are, after all, human first, with unconscious drives that govern how we negotiate cognitive dissonance and our need for the illusion of absolute justice, as Yuval Noah Harari explains in a recent podcast.
Humans are notoriously bad at holding in mind the multiplicity of experience and the various (often competing) narratives that impede the capacity for nuanced thinking. We would like to believe we are better and more capable at this than the average person, but divisiveness in our profession has become disturbingly pronounced, making it more essential than ever for us to carve out reflective space.
The personal and professional divide
Geopolitical conflicts like the current war have a unique capacity to ignite strong emotions and deeply held convictions. It’s not hard to quickly become embroiled in passionate and engaged debate.
While discussion and discourse are healthy, these are bleeding into professional spheres, creating rifts within our clinical communities and contributing to a culture where not everyone feels safe. Look at any professional listserv in medicine or psychology and you will find the evidence. It should be an immediate call to action that we need to be fostering a different type of environment.
The impact of divisiveness is profound, hindering opportunities for collaboration, mentorship, and the free exchange of ideas among clinicians. It may lead to misunderstandings, mistrust, and an erosion of the support systems we rely on, ultimately diverting energy away from the pursuit of providing quality patient care.
Balancing obligations and limits
Because of the inherent power differential that accompanies being in a provider role (physician and psychologist alike), we have a social and moral responsibility to be mindful of what we share – for the sake of humanity. There is an implicit assumption that a provider's guidance should be adhered to and respected. In short, our words carry tremendous weight and deeply matter, and members of the general public ascribe significant meaning to messages put out by professionals.
When providers steer from their lanes of professional expertise to provide the general public with opinions or recommendations on nonmedical topics, problematic precedents can be set. We may be doing people a disservice.
Unfortunately, I have heard several anecdotes about clinicians who spend their patients' time in session pushing their own ideological agendas. The patient-provider relationship is founded on principles of trust, empathy, and collaboration, with the primary goal of improving overall well-being and addressing a specific presenting problem. Of course, issues that need to be addressed emerge outside the initial scope of treatment; this is an inherent part of the process. However, a grave concern emerges when clinicians initiate dialogue that is not meaningful to a patient, disclose and discuss their personal ideologies, or pressure patients to explain their beliefs in an attempt to change their minds.
Clinicians pushing their own agenda during patient sessions is antithetical to the objectives of psychotherapy and compromises the therapeutic alliance by diverting the focus of care in a way that serves the clinician rather than the client. It is quite the opposite of the patient-centered care that we strive for in training and practice.
Even within one’s theoretical professional scope of competence, I have seen the impact of emotions running high during this conflict, and have witnessed trained professionals making light of, or even mocking, hostages and their behavior upon release. These are care providers who could elucidate the complexities of captor-captive dynamics and the impact of trauma for the general public, yet they are contributing to dangerous perceptions and divisiveness.
I have also seen providers justify sexual violence, diminishing survivor and witness testimony due to ideological differences and strong personal beliefs. This is harmful to those impacted and does a disservice to our profession at large. In a helping profession we should strive to support and advocate for anyone who has been maltreated or experienced any form of victimization, violence, or abuse. This should be a professional standard.
As clinicians, we have an ethical obligation to uphold the well-being, autonomy, and dignity of our patients, and of humanity. It is crucial to recognize the limits of our expertise and the ethical concerns that can arise in light of geopolitical conflict. How can we balance our duty to provide psychological support while also being cautious about delving into the realms of political analysis, foreign policy, or international relations?
The pitfalls of well-intentioned speaking out
In the age of social media and instant communication, a critical aspect to consider is the role of speaking out. The point I made above, in naming all partaking in the current conflict, speaks to this issue.
As providers and programs, we must be mindful of the inadvertent harm that can arise from making brief, underdeveloped, uninformed, or emotionally charged statements. Expressing opinions without a solid understanding of the historical, cultural, and political nuances of a conflict can contribute to misinformation and further polarization.
Anecdotally, there appears to be some significant degree of bias emerging within professional fields (e.g., psychology, medicine) and an innate calling for providers to “weigh in” as the war continues. Obviously, physicians and psychologists are trained to provide care and to be humanistic and empathic, but the majority do not have expertise in geopolitics or a nuanced awareness of the complexities of the conflict in the Middle East.
While hearts may be in the right place, issuing statements on complicated humanitarian and political situations can have unintended and harmful consequences (in terms of antisemitism and Islamophobia, increased incidence of hate crimes, and colleagues not feeling safe within professional societies or member organizations).
Unsophisticated, overly simplistic, and reductionistic statements that do not adequately convey nuance will not reflect the range of experience among providers in the field (or the patients we treat). It is essential for clinicians and institutions putting out public statements to engage in deep reflection and exercise discernment. We must recognize that our words carry weight, given our position of influence as treatment providers. To minimize harm, we should seek to provide information that is fair, vetted, and balanced, and encourage open, respectful dialogue rather than asserting definitive positions.
Ultimately, as providers we must strive for unity and inclusivity amid the current challenges. It is important for us to embody a spirit of collaboration during a time marked by deep fragmentation.
By acknowledging our limitations, promoting informed discussion, and avoiding the pitfalls of uninformed advocacy, we can contribute to a more compassionate and understanding world, even in the face of the most divisive geopolitical conflicts. We have an obligation to uphold when it comes to ourselves as professionals, and we need to foster healthy, respectful dialogue while maintaining an awareness of our blind spots.
Dr. Feldman is a licensed clinical psychologist in private practice in Miami. She is an adjunct professor in the College of Psychology at Nova Southeastern University, Fort Lauderdale, Fla., where she teaches clinical psychology doctoral students. She is an affiliate of Baptist West Kendall Hospital/FIU Family Medicine Residency Program and serves as president on the board of directors of The Southeast Florida Association for Psychoanalytic Psychology. The opinions expressed by Dr. Feldman are her own and do not represent the institutions with which she is affiliated. She has no disclosures.