Clinical Psychiatry News is the online destination and multimedia property of Clinical Psychiatry News, the independent news publication for psychiatrists. Since 1971, Clinical Psychiatry News has been the leading source of news and commentary about clinical developments in psychiatry, as well as health care policy and regulations that affect the physician's practice.


EHR: A progress report


I wrote my first column on electronic health records in the mid-1990s. At the time, it seemed like an idea whose time had come. After all, in an era when just about every essential process in medicine had already been computerized, we physicians continued to process clinical data – our key asset – with pen and paper. Most of us were reluctant to make the switch, and for good reason: choosing the right EHR system was difficult at best, and once the choice was made, conversion was a nightmare. Plus, there was no clear incentive to do it.

Then, the government stepped in. In his 2004 State of the Union address, President George W. Bush outlined a plan to ensure that most Americans had electronic health records within 10 years. “By computerizing health records,” the president said, “we can avoid dangerous medical mistakes, reduce costs, and improve care.” The goal was to eliminate missing charts, duplication of lab testing, ineffective documentation, and inordinate amounts of time spent on paperwork, not to mention illegible handwriting, poor coordination of care between physicians, and many other problems. Studies were cited suggesting that EHRs shortened inpatient stays, decreased the risk of adverse drug interactions, improved the consistency and content of records, and improved continuity of care and follow-up.

Dr. Joseph S. Eastern

The EHR Incentive Program (later renamed the Promoting Interoperability Program) was introduced to encourage physicians and hospitals “to adopt, implement, upgrade, and demonstrate meaningful use of certified electronic health record technology.”

Nearly a quarter-century later, implementation is well behind schedule. According to a 2019 federal study, while nearly all hospitals (96%) have adopted a certified EHR, only 72% of office-based physicians have done so.

There are multiple reasons for this. For one thing, EHRs are still, by and large, slower than pen and paper, because direct data entry is still done primarily by keyboard. Voice recognition and handheld and wireless devices have been developed, but most work only for specialized tasks. Even the best systems take more clinician time per encounter than the manual processes they replace.

Physicians have been slow to warm to a system that slows them down and forces them to change the way they think and work. In addition, paper systems never crash; the prospect of a server malfunction or Internet failure bringing an entire clinic to a grinding halt is not particularly inviting.

The special needs of dermatology – high patient volumes, multiple diagnoses and prescriptions per patient, the wide variety of procedures we perform, and digital image storage – present further hurdles.

Nevertheless, the march toward electronic record keeping continues, and I continue to receive many questions about choosing a good EHR system. As always, I cannot recommend any specific products since every office has unique needs and requirements.



The key phrase to keep in mind is caveat emptor. Several regulatory bodies exist to test vendor claims and certify system behaviors, but different agencies use different criteria that may or may not be relevant to your requirements. Vaporware is still as common as real software; beware the “feature in the next release” that might never appear, particularly if you need it right now.

Avoid the temptation to buy a flashy new system and then try to adapt it to your office; figure out your needs first, then find a system that meets them.

Unfortunately, there is no easy way around doing the work of comparing one system with another. The most important information a vendor can give you is the names and addresses of two or more offices where you can go watch their system in action. Site visits are time-consuming, but they are the only way to pick the best EHR the first time around.

Don’t be the first office using a new system. Let the vendor work out the bugs somewhere else.

Above all, if you have disorganized paper records, don’t count on EHR to automatically solve your problems. Well-designed paper systems usually lend themselves to effective automation, but automating a poorly designed system just increases the chaos. If your paper system is in disarray, solve that problem before considering EHR.

With all of their problems and hurdles, EHRs will inevitably be a part of most of our lives. And for those who take the time to do it right, they will ultimately be an improvement.

Think of information technologies as power tools: They can help you to do things better, but they can also amplify your errors. So choose carefully.

Dr. Eastern practices dermatology and dermatologic surgery in Belleville, N.J. He is the author of numerous articles and textbook chapters, and is a longtime monthly columnist for Dermatology News. Write to him at [email protected].


COVID-19 linked to increased Alzheimer’s risk


COVID-19 has been linked to a significantly increased risk for new-onset Alzheimer’s disease (AD), a new study suggests.

The study of more than 6 million people aged 65 years or older found a 50%-80% increased risk for AD in the year after COVID-19; the risk was especially high for women older than 85 years.

However, the investigators were quick to point out that the observational retrospective study offers no evidence that COVID-19 causes AD. There could be a viral etiology at play, or the connection could be related to inflammation in neural tissue from the SARS-CoV-2 infection. Or it could simply be that exposure to the health care system for COVID-19 increased the odds of detection of existing undiagnosed AD cases.

Whatever the case, these findings point to a potential spike in AD cases, which is a cause for concern, study investigator Pamela Davis, MD, PhD, a professor in the Center for Community Health Integration at Case Western Reserve University, Cleveland, said in an interview.

“COVID may be giving us a legacy of ongoing medical difficulties,” Dr. Davis said. “We were already concerned about having a very large care burden and cost burden from Alzheimer’s disease. If this is another burden that’s increased by COVID, this is something we’re really going to have to prepare for.”

The findings were published online in the Journal of Alzheimer’s Disease.
 

Increased risk

Earlier research points to a potential link between COVID-19 and increased risk for AD and Parkinson’s disease.

For the current study, researchers analyzed anonymous electronic health records of 6.2 million adults aged 65 years or older who received medical treatment between February 2020 and May 2021 and had no prior diagnosis of AD. The database includes information on almost 30% of the entire U.S. population.

Overall, there were 410,748 cases of COVID-19 during the study period.

The overall risk for new diagnosis of AD in the COVID-19 cohort was close to double that of those who did not have COVID-19 (0.68% vs. 0.35%, respectively).

After propensity-score matching, those who had COVID-19 had a significantly higher risk for an AD diagnosis than those who were not infected (hazard ratio [HR], 1.69; 95% confidence interval [CI], 1.53-1.72).

Risk for AD was elevated in all age groups, regardless of gender or ethnicity. Researchers did not collect data on COVID-19 severity, and the medical codes for long COVID were not published until after the study had ended.

Those with the highest risk were individuals older than 85 years (HR, 1.89; 95% CI, 1.73-2.07) and women (HR, 1.82; 95% CI, 1.69-1.97).

“We expected to see some impact, but I was surprised that it was as potent as it was,” Dr. Davis said.
 

Association, not causation

Heather Snyder, PhD, Alzheimer’s Association vice president of medical and scientific relations, who commented on the findings for this article, called the study interesting but emphasized caution in interpreting the results.

“Because this study only showed an association through medical records, we cannot know what the underlying mechanisms driving this association are without more research,” Dr. Snyder said. “If you have had COVID-19, it doesn’t mean you’re going to get dementia. But if you have had COVID-19 and are experiencing long-term symptoms including cognitive difficulties, talk to your doctor.”

Dr. Davis agreed, noting that this type of study offers information on association, but not causation. “I do think that this makes it imperative that we continue to follow the population for what’s going on in various neurodegenerative diseases,” Dr. Davis said.

The study was funded by the National Institute on Aging, the National Institute on Alcohol Abuse and Alcoholism, the Clinical and Translational Science Collaborative of Cleveland, and the National Cancer Institute. Dr. Snyder reports no relevant financial conflicts.

A version of this article first appeared on Medscape.com.


Quiet quitting: Are physicians dying inside bit by bit? Or setting healthy boundaries?


In the past few months, “quiet quitting” has garnered increasing traction across social media platforms. My morning review of social media revealed thousands of posts ranging from “Why doing less at work could be good for you – and your employer” to “After ‘quiet quitting’ here comes ‘quiet firing.’ ”

But quiet quitting is neither quiet nor quitting.

Quiet quitting is a misnomer. Individuals are not quitting their jobs; rather, they are quitting the idea of consistently going “above and beyond” in the workplace as normal and necessary. In addition, quiet quitters are firmer with their boundaries, do not take on work above and beyond clearly stated expectations, do not respond after hours, and do not feel like they are “not doing their job” when they are not immediately available.

Individuals who “quiet quit” continue to meet the demands of their job but reject the hustle-culture mentality that you must always be available for more work and, most importantly, that your value as a person and your self-worth are defined and determined by your work. Quiet quitters believe that it is possible to have good boundaries and yet remain productive, engaged, and active within the workplace.

Earlier this month, a tutorial NPR posted on how to set better boundaries at work garnered 491,000 views, reflecting employees’ difficulties in communicating their needs, thoughts, and availability to their employers. Quiet quitting refers not only to rejecting the idea of going above and beyond in the workplace but also to feeling confident that there will not be negative ramifications for not consistently working beyond the expected requirements.

A focus on balance, life, loves, and family is rarely addressed or emphasized by traditional employers; employees have little skill in addressing boundaries and clarifying their value and availability. For decades, “needing” flexibility of any kind or valuing activities as much as your job were viewed as negative attributes, making those individuals less-desired employees.

Data support the quiet quitting trend. Gallup data reveal that employee engagement has fallen for 2 consecutive years in the U.S. workforce. In the first quarter of 2022, Generation Z and younger Millennials reported the lowest engagement of any group, at 31%. More than half of this cohort, 54%, were classified as “not engaged” in their workplace.

Why is quiet quitting gaining prominence now? COVID may play a role.

Many suggest that self-evaluation and establishing firmer boundaries are a logical response to the emotional sequelae of COVID. Quiet quitting appears to have been fueled by the pandemic. Employees were forced into crisis mode by COVID; the lines between work, life, and home evaporated, allowing or forcing workers to evaluate their efficacy and satisfaction. With the structural impact of COVID receding and a return to more standard work practices, it is to be expected that the job “rules” once held as truths will come under evaluation and scrutiny.

Perhaps COVID has forced, and provided, another opportunity for us to closely examine our routines and habits and take stock of what really matters. Generations expectedly differ in their values and definitions of success. COVID has set prior established rules on fire, by forcing patterns and expectations that were neither expected nor wanted, within the context of a global health crisis. Within this backdrop, should we really believe our worth is determined by our job?

The truth is, we are still grieving what we lost during COVID and we have expectedly not assimilated to “the new normal.” Psychology has long recognized that losing structures and supports, routines and habits, causes symptoms of significant discomfort.

The idea that we would return to prior workplace expectations is naive. The idea we would “return to life as it was” is naive. It seems expected, then, that both employers and employees should evaluate their goals and communicate more openly about how each can be met.

It is incumbent upon employers to set up clear guidelines regarding expectations, including rewards for performance and expectations for time, both within and outside of the work schedule. Employers must recognize symptoms of detachment in their employees and engage in the process of continually clarifying roles and expectations while providing what employees need to succeed at their highest level. Employees, in turn, must examine their own goals, communicate their needs, meet their responsibilities fully, and take on the challenge of determining their own definition of balance.

Maybe instead of quiet quitting, we should call this new movement “self-awareness, growth, and evolution.” Hmmm, there’s an intriguing thought.

Dr. Calvery is professor of pediatrics at the University of Louisville (Ky.). She disclosed no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.


Detachment predicts worse posttraumatic outcomes


Feelings of detachment following a traumatic event are a marker of more severe psychiatric outcomes, including depression and anxiety, new research suggests.

The results highlight the importance of screening for dissociation in patients who have experienced trauma, study investigator Lauren A.M. Lebois, PhD, director of the dissociative disorders and trauma research program at McLean Hospital in Belmont, Mass., told this news organization.

“Clinicians could identify individuals potentially at risk of a chronic, more severe psychiatric course before these people go down that road, and they have the opportunity to connect folks with a phased trauma treatment approach to speed their recovery,” said Dr. Lebois, who is also an assistant professor of psychiatry at Harvard Medical School, Boston.

The study was published in the American Journal of Psychiatry.
 

Underdiagnosed

Feelings of detachment or derealization are a type of dissociation. Patients with the syndrome report feeling foggy or as if they are in a dream. Dissociative diagnoses are not rare and, in fact, are more prevalent than schizophrenia.

Research supports a powerful relationship between dissociation and traumatic experiences. However, dissociation is among the most stigmatized of psychiatric conditions. Even among clinicians and researchers, beliefs about dissociation are often not based on the scientific literature, said Dr. Lebois.

“For instance, skepticism, misunderstanding, and lack of professional education about dissociation all contribute to striking rates of underdiagnosis and misdiagnoses,” she said.

Dr. Lebois and colleagues used data from the larger Advancing Understanding of Recovery After Trauma (AURORA) study, which included 1,464 adults (mean age, 35 years) who presented to 22 U.S. emergency departments. Patients had experienced a traumatic event such as a motor vehicle crash or a physical or sexual assault.

About 2 weeks after the trauma, participants reported symptoms of derealization as measured by a two-item version of the Brief Dissociative Experiences Scale.
 

Brain imaging data

A subset of 145 patients underwent functional MRI (fMRI), during which they completed an emotion reactivity task (viewing fearful-looking human faces) and a resting-state scan.

In addition to measuring history of childhood maltreatment, researchers assessed posttraumatic stress symptom severity at 2 weeks and again at 3 months using the posttraumatic stress disorder checklist. Also at 3 months, they measured depression and anxiety symptoms, pain, and functional impairment.

About 55% of self-report participants and 50% of MRI participants endorsed some level of persistent derealization at 2 weeks.

After controlling for potential confounders, including sex, age, childhood maltreatment, and current posttraumatic stress symptoms, researchers found persistent derealization was associated with increased ventromedial prefrontal cortex (vmPFC) activity while viewing fearful faces.

The vmPFC helps to regulate emotional and physical reactions. “This region puts the ‘brakes’ on your emotional and physical reactivity – helping you to calm down” after a threatening or stressful experience has passed, said Dr. Lebois.

Researchers also found an association between higher self-reported derealization and decreased resting-state connectivity between the vmPFC and the orbitofrontal cortex and right lobule VIIIa – a region of the cerebellum involved in sensorimotor function.

“This may contribute to perceptual and affective distortions experienced during derealization – for example, feelings that surroundings are fading away, unreal, or strange,” said Dr. Lebois.
 

 

 

More pain, depression, anxiety

Higher levels of self-reported derealization at 2 weeks post trauma predicted higher levels of PTSD, anxiety, and depression as well as more bodily pain and impairment in work, family, and social life at 3 months.

“When we accounted for baseline levels of posttraumatic stress symptoms and trauma history, higher levels of self-reported derealization still predicted higher posttraumatic stress disorder and depression symptoms at 3 months,” said Dr. Lebois.

Additional adjusted analyses showed increased vmPFC activity during the fearful face task predicted 3-month self-reported PTSD symptoms.

Dr. Lebois “highly recommends” clinicians screen for dissociative symptoms, including derealization, in patients with trauma. Self-report screening tools are freely available online.

She noted patients with significant dissociative symptoms often do better with a “phase-oriented” approach to trauma treatment.

“In phase one, they learn emotional regulation skills to help them take more control over when they dissociate. Then they can successfully move on to trauma processing in phase two, which can involve exposure to trauma details.”

Although the field is not yet ready to use brain scans to diagnose dissociative symptoms, the new results “take us one step closer to being able to use objective neuroimaging biomarkers of derealization to augment subjective self-report measures,” said Dr. Lebois.

A limitation of the study was that it could not determine a causal relationship, as some derealization may have been present before the traumatic event. The findings may not generalize to other types of dissociation, and derealization was assessed only through self-report 2 weeks after the trauma.

Another limitation was exclusion of patients with self-inflicted injuries or who were involved in domestic violence. The researchers noted the prevalence of derealization might have been even higher if such individuals were included.
 

An important investigation

In an accompanying editorial, Lisa M. Shin, PhD, of the department of psychology, Tufts University, and the department of psychiatry, Massachusetts General Hospital and Harvard Medical School, Boston, notes that having both clinical and neuroimaging variables, as well as a large sample size, makes the study “an important investigation” into predictors of psychiatric symptoms after trauma.

Investigating a specific subtype of dissociation – persistent derealization – adds to the “novelty” of the study, she said.

Dr. Lisa M. Shin

The new findings “are certainly exciting for their potential clinical relevance and contributions to neurocircuitry models of PTSD,” she writes.

Some may argue administering a short, self-report measure of derealization “is far more efficient, cost-effective, and inclusive than conducting a specialized and expensive fMRI scan that is unlikely to be available to everyone,” notes Dr. Shin.

However, she added, a potential benefit of such a scan is identification of specific brain regions as potential targets for intervention. “For example, the results of this and other studies suggest that the vmPFC is a reasonable target for transcranial magnetic stimulation or its variants.”

The new results need to be replicated in a large, independent sample, said Dr. Shin. She added it would be helpful to know if other types of dissociation, and activation in other subregions of the vmPFC, also predict psychiatric outcomes after a trauma.

The study was supported by National Institute of Mental Health grants, the U.S. Army Medical Research and Materiel Command, One Mind, and the Mayday Fund. Dr. Lebois has received grant support from NIMH, and her spouse receives payments from Vanderbilt University for technology licensed to Acadia Pharmaceuticals. Dr. Shin receives textbook-related royalties from Pearson.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Feelings of detachment following a traumatic event are a marker of more severe psychiatric outcomes, including depression and anxiety, new research suggests.

The results highlight the importance of screening for dissociation in patients who have experienced trauma, study investigator Lauren A.M. Lebois, PhD, director of the dissociative disorders and trauma research program at McLean Hospital in Belmont, Mass., told this news organization.

“Clinicians could identify individuals potentially at risk of a chronic, more severe psychiatric course before these people go down that road, and they have the opportunity to connect folks with a phased trauma treatment approach to speed their recovery,” said Dr. Lebois, who is also an assistant professor of psychiatry at Harvard Medical School, Boston.

The study was published in the American Journal of Psychiatry.
 

Underdiagnosed

Feelings of detachment or derealization are a type of dissociation. Patients with the syndrome report feeling foggy or as if they are in a dream. Dissociative diagnoses are not rare and, in fact, are more prevalent than schizophrenia.

Research supports a powerful relationship between dissociation and traumatic experiences. However, dissociation is among the most stigmatized of psychiatric conditions. Even among clinicians and researchers, beliefs about dissociation are often not based on the scientific literature, said Dr. Lebois.

“For instance, skepticism, misunderstanding, and lack of professional education about dissociation all contribute to striking rates of underdiagnosis and misdiagnoses,” she said.

Dr. Lebois and colleagues used data from the larger Advancing Understanding of Recovery After Trauma (AURORA) study and included 1,464 adults, mean age 35 years, appearing at 22 U.S. emergency departments. Patients experienced a traumatic event such as a motor vehicle crash or physical or sexual assault.

About 2 weeks after the trauma, participants reported symptoms of derealization as measured by a two-item version of the Brief Dissociative Experiences Scale.
 

Brain imaging data

A subset of 145 patients underwent functional MRI (fMRI), during which they completed an emotion reactivity task (viewing fearful-looking human faces) and a resting-state scan.

In addition to measuring history of childhood maltreatment, researchers assessed posttraumatic stress symptom severity at 2 weeks and again at 3 months using the posttraumatic stress disorder checklist. Also at 3 months, they measured depression and anxiety symptoms, pain, and functional impairment.

About 55% of self-report participants and 50% of MRI participants endorsed some level of persistent derealization at 2 weeks.

After controlling for potential confounders, including sex, age, childhood maltreatment, and current posttraumatic stress symptoms, researchers found persistent derealization was associated with increased ventromedial prefrontal cortex (vmPFC) activity while viewing fearful faces.

The vmPFC helps to regulate emotional and physical reactions. “This region puts the ‘brakes’ on your emotional and physical reactivity – helping you to calm down” after a threatening or stressful experience has passed, said Dr. Lebois.

Researchers also found an association between higher self-reported derealization and decreased resting-state connectivity between the vmPFC and the orbitofrontal cortex and right lobule VIIIa – a region of the cerebellum involved in sensorimotor function.

“This may contribute to perceptual and affective distortions experienced during derealization – for example, feelings that surroundings are fading away, unreal, or strange,” said Dr. Lebois.
 

More pain, depression, anxiety

Higher levels of self-reported derealization at 2 weeks post trauma predicted higher levels of PTSD, anxiety, and depression as well as more bodily pain and impairment in work, family, and social life at 3 months.

“When we accounted for baseline levels of posttraumatic stress symptoms and trauma history, higher levels of self-reported derealization still predicted higher posttraumatic stress disorder and depression symptoms at 3 months,” said Dr. Lebois.

Additional adjusted analyses showed increased vmPFC activity during the fearful face task predicted 3-month self-reported PTSD symptoms.

Dr. Lebois “highly recommends” clinicians screen for dissociative symptoms, including derealization, in patients with trauma. Self-report screening tools are freely available online.

She noted patients with significant dissociative symptoms often do better with a “phase-oriented” approach to trauma treatment.

“In phase one, they learn emotional regulation skills to help them take more control over when they dissociate. Then they can successfully move on to trauma processing in phase two, which can involve exposure to trauma details.”

Although the field is not yet ready to use brain scans to diagnose dissociative symptoms, the new results “take us one step closer to being able to use objective neuroimaging biomarkers of derealization to augment subjective self-report measures,” said Dr. Lebois.

A limitation of the study was that it could not determine a causal relationship, as some derealization may have been present before the traumatic event. In addition, the findings may not generalize to other types of dissociation, and derealization was assessed only by self-report 2 weeks after the trauma.

Another limitation was the exclusion of patients with self-inflicted injuries or those involved in domestic violence. The researchers noted the prevalence of derealization might have been even higher had such individuals been included.
 

An important investigation

In an accompanying editorial, Lisa M. Shin, PhD, of the department of psychology, Tufts University, and the department of psychiatry, Massachusetts General Hospital and Harvard Medical School, Boston, notes that the inclusion of both clinical and neuroimaging variables, along with the large sample size, makes the study “an important investigation” into predictors of psychiatric symptoms post trauma.

Investigating a specific subtype of dissociation – persistent derealization – adds to the “novelty” of the study, she said.


The new findings “are certainly exciting for their potential clinical relevance and contributions to neurocircuitry models of PTSD,” she writes.

Some may argue administering a short, self-report measure of derealization “is far more efficient, cost-effective, and inclusive than conducting a specialized and expensive fMRI scan that is unlikely to be available to everyone,” notes Dr. Shin.

However, she added, a potential benefit of such a scan is identification of specific brain regions as potential targets for intervention. “For example, the results of this and other studies suggest that the vmPFC is a reasonable target for transcranial magnetic stimulation or its variants.”

The new results need to be replicated in a large, independent sample, said Dr. Shin. She added it would be helpful to know if other types of dissociation, and activation in other subregions of the vmPFC, also predict psychiatric outcomes after a trauma.

The study was supported by National Institute of Mental Health grants, the U.S. Army Medical Research and Materiel Command, One Mind, and the Mayday Fund. Dr. Lebois has received grant support from NIMH, and her spouse receives payments from Vanderbilt University for technology licensed to Acadia Pharmaceuticals. Dr. Shin receives textbook-related royalties from Pearson.

A version of this article first appeared on Medscape.com.


Depression as a terminal illness

Article Type
Changed
Thu, 09/15/2022 - 16:48

Is there a place for palliative care?

In 2020, there were 5,224 suicide deaths registered in England and Wales.1 The Mental Health Foundation, a London-based charitable organization, reports that approximately 70% of such deaths are in patients with depression.2 The number of attempted suicides is much higher – the South West London and St. George’s Mental Health Trust estimates that at least 140,000 people attempt suicide in England and Wales every year.3

In suicidal depression, the psychological pain is often unbearable and feels overwhelmingly incompatible with life. The person is no longer living but merely surviving, and eventually the exhaustion leads to decompensation, marked by suicide. The goal is to end the suffering permanently, and this is achieved through death.


Depression, like all other physical and mental illnesses, runs a course. This course is highly variable between individuals and can vary even between separate relapse episodes in the same patient. Like many diagnoses, depression is known to lead to death in a significant number of people. Many suicidally depressed patients feel that death will be an inevitable result of the illness.

Suicide is often viewed as a symptom of severe depression, but what if we considered death as part of the disease process itself? Consequently, would it be justifiable to consider depression in these patients as a form of terminal illness, since without treatment, the condition would lead to death? Accordingly, could there be a place for palliative care in a small minority of suicidally depressed patients? Taking such a perspective would mean that instead of placing the focus on the prevention of deaths and prolonging of lifespan, the focus would be on making the patients comfortable as the disease progresses, maintaining their dignity, and promoting autonomy.
 

Suicidal depression and rights

Patients with psychiatric conditions are generally not given the same rights to make decisions regarding their mental health and treatment, particularly if they wish to decline treatment. The rationale for this is that psychiatric patients do not have the capacity to make such decisions in the acute setting, because of the direct effects of the unwell mind on their decision-making processes and cognitive faculties. While this may be true in some cases, there is limited evidence that this applies to all suicidally depressed patients in all cases.

Another argument against allowing suicidally depressed patients to decline treatment is the notion that the episode of depression can be successfully treated, and the patients can return to their normal level of functioning. However, in individuals with a previous history of severe depression, it is possible that they will relapse again at some point. In the same way, a cancer can be treated, and patients could return to their baseline level of functioning, only for the cancer to then return later in life. In both cases, these relapses are emotionally and physically exhausting and painful to get through. The difference is that a cancer patient can decline further treatment and opt for no treatment or for palliative treatment, knowing that the disease will shorten life expectancy. For suicidal depression, this is not an option. Such patients may be sectioned, admitted, and treated against their will. Suicide, which could be considered a natural endpoint of the depressive illness, is unacceptable.

Is it fair to confiscate one’s right to decline treatment, solely because that person suffers from a mental illness, as opposed to a physical one? Numerous studies have demonstrated clear structural, neurological, and neurochemical changes in suicidal depression. This is evidence that the condition has a clear physical component. Other conditions, such as dementia and chronic pain, have previously been accepted as grounds for euthanasia in certain countries. Pain is a subjective experience of nociceptive and neurochemical signaling. In the same way, depression is a subjective experience involving aberrant neurochemical signaling. The difference is that physical pain can often be localized. However, patients with suicidal depression often experience very severe and tangible pain that can be difficult to articulate and for others to understand if they have never experienced it themselves.

Like distinct forms of physical pain, suicidal depression creates a different form of pain, but it is pain, nonetheless. Is it therefore fair for suicidally depressed patients to be given lesser rights than those suffering from physical illnesses in determining their fate?
 

Suicidal depression and capacity

A patient is assumed to have capacity unless proven otherwise. This presumption is often reversed when managing psychiatric patients. However, if patients are able to fulfill all criteria required for demonstrating capacity (understanding the information, retaining it, weighing it up, and communicating a decision), surely they have demonstrated the capacity to make their own decisions, whether that is to receive or to refuse treatment.

For physical illnesses, adults with capacity are permitted to make decisions that their treating teams may not agree with, but this disagreement alone is generally insufficient to override the decisions. These patients, unlike in suicidal depression, have the right to refuse lifesaving or life-prolonging treatment.

An argument for this is that in terminal physical illnesses, death is a passive process and neither the patient nor the physician is actively causing it. However, in many palliative settings, patients can be given medications and treatment for symptomatic relief, even if these may hasten their death. The principle that makes this permissible is that the primary aim is to improve the symptoms and ensure comfort; the unintended effects include side effects and a hastened death. Similarly, in suicidal depression, one could argue that the patient should be permitted medications that may hasten or lead to death, so long as the primary aim is to improve the symptoms of the unbearable mental pain and suffering.

Let us consider an alternative scenario. What if previously suicidal patients are currently in remission from depression and make advance directives? In their current healthy state, they assert that if, in the future, they were to relapse, they would not want any form of treatment. Instead, they wish for the disease to run its course, which may end in death through suicide.

In this case, the circumstances in which the statement was made would be entirely valid – the patients at that moment have capacity, are not under coercion, are able to articulate logical thought processes, and their reasoning would not be affected by a concurrent psychiatric pathology. Furthermore, they can demonstrate that suicide is not an impulsive decision and that they have considered the consequences of suicide on themselves and others. If the patients can demonstrate all of the above, what would the ethical grounds be for refusing this advance directive?
 

Medical ethics

Below, I consider this debate in the context of four pillars of medical ethics.
 

Non-maleficence

To determine whether an action is in line with non-maleficence, one must ask whether the proposed treatment will improve or resolve one’s condition. In the case of severe suicidal depression, the treatment may help patients in the short term, but what happens if or when they relapse? The treatment will likely prolong life, but also inadvertently prolong suffering. What if the patients do not wish to go through this again? The treatment regime can be profoundly taxing for the patients, the loved ones, and sometimes even for the treating team. Are we doing more harm by forcing these patients to stay alive against their will?

Beneficence

Beneficence is the moral duty to promote the action that is in the patient’s best interest. But who should determine what the patient’s best interests are if the patient and the doctor disagree? Usually, this decision is made by the treating doctor, who considers the patient’s past and present wishes, beliefs and values, and capacity assessment. Supposing that the law was not a restriction, could one’s psychiatrist ever agree on psychiatric grounds alone that it is indeed in the patient’s best interests to die?

Doctors play a central role in the duty of care. But care does not always mean active treatment. Caring encompasses physical, psychological, and spiritual welfare and includes considering an individual patient’s dignity, personal circumstances, and wishes. In certain circumstances, keeping patients with capacity alive against their wishes could be more harmful than caring.
 

Autonomy

Autonomy gives the patients ultimate decision-making responsibility for their own lives. It allows patients with capacity to decline treatment that is recommended by their physicians and to make decisions regarding their own death. However, in suicidally depressed patients, this autonomy is confiscated. Severely unwell patients, at high risk of committing suicide, are not permitted the autonomy to make the decision regarding their treatment, suicide, and death.

Justice

A justice-orientated and utilitarian view questions whether spending resources on these patients wastes time, money, and expertise, and whether those resources should instead be spent on patients who do want treatment.

For example, the British National Health Service holds an outstanding debt of £13.4 billion.4 The financial cost of treating mental illness in 2020/2021 was £14.31 billion.5 The NHS estimates that wider costs to the national economy, including welfare benefits, housing support, social workers, community support, lost productivity at work, etc., amount to approximately £77 billion annually.6 Many severely depressed patients are so unwell that their ability to contribute to society, financially, socially, and otherwise, is minimal. If patients with capacity genuinely want to die and society would benefit from a reduction in the pressures on health and social care services, would it not be in both their best interests to allow them to die? This way, resources could be redirected to service users who would appreciate and benefit from them the most.

A consequentialist view focuses on whether the action will benefit the patient overall; the action itself is not so relevant. According to this view, keeping suicidally depressed patients alive against their wishes would be ethical if the patients lack capacity. Keeping them safe and treating them until they are better would overall be in the patients’ best interests. However, if the patients do have capacity and wish to die, forcing them to stay alive and undergo treatment against their wishes would merely prolong their suffering and thus could be considered unethical.
 

When enough is enough

In suicidal treatment-resistant depression, where the patient has tried multiple treatments over time and carefully considered alternatives, when is it time to stop trying? For physical illness, patients can refuse treatment provided they can demonstrate capacity. In depression, they can refuse treatment only if they can demonstrate that they are not at serious risk to themselves or others. Most societies consider suicide as a serious risk to self and therefore unacceptable. However, if we considered suicide as a natural endpoint of the disease process, should the patient have the right to refuse treatment and allow the disease to progress to death?

The treatment regime can be a lengthy process and the repeated failures to improve can be physically and mentally exhausting and further compound the hopelessness. Treatments often have side effects, which further erode the patient’s physical and mental wellbeing. Is there a time when giving up and withdrawing active treatment is in the patient’s best interests, especially if that is what the patient wants?

Terminal diseases are incurable and likely to hasten one’s death. Severe suicidal treatment-resistant depression meets both criteria – it is unresponsive to treatment and carries a high likelihood of precipitating premature death through suicide. Most terminal illnesses can be managed with palliative treatment. In the context of severe suicidal depression, euthanasia and assisted suicide could be considered as means of palliative care.

Palliative care involves managing the patient’s symptomatology, dignity, and comfort. Euthanasia and assisted suicide help to address all of these. Like palliative care, euthanasia and assisted suicide aim to improve symptoms of depression by alleviating pain and suffering, even if they may hasten death.
 

Euthanasia and assisted suicide in severe depression

Euthanasia and assisted suicide are legal in seven countries. Two countries (Belgium and the Netherlands) permit euthanasia for psychiatric illnesses. Passive euthanasia – for example, withholding artificial life support – is practiced in most countries. In suicidal depression, withholding treatment could be viewed in a similar light, as it may directly lead to death by suicide.

In active euthanasia and assisted suicide, the patient is given a chemical that will directly lead to death. Euthanasia and assisted suicide allow individuals to die with dignity in a controlled and organized manner. They end the patient’s suffering and allow the person to finally find peace. The difficulties that led the patient to seek euthanasia or assisted suicide indicate a loss of control over the pain and suffering in life, and euthanasia allows the patient to regain this control and autonomy through death. It also gives these individuals a chance to properly say goodbye to their loved ones and to share their thoughts and feelings.

In contrast, suicide is often covert, clandestine, and planned in secret, and it frequently requires individuals to be dishonest with their closest loved ones. The suicide often comes as a shock to the loved ones, and profound grief, questions, anger, pain, sorrow, and guilt follow. These arise from questions that have been left unanswered, thoughts that were never shared, regret that they had not done more to help, and anguish at knowing that their loved one died alone, in unbearable mental agony, unable to speak to anyone about this final hurdle.

Euthanasia and assisted suicide provide a path to overcome all these issues. They encourage open conversations between the patients, their loved ones, and the treating team. They promote transparency and mutual support, and they help prepare the loved ones for the death. In this way, euthanasia and assisted suicide can benefit both the patient and the loved ones.

A significant proportion of severely suicidally depressed patients will eventually go on to commit or attempt suicide. Thus, giving them the autonomy to choose euthanasia or assisted suicide could be considered a kind, fair, and compassionate course of action, as it respects their wishes, and allows them to escape their suffering and to die with dignity.
 

Conclusion

Depression has historically never been considered a terminal illness, but there is undeniable evidence that a significant number of deaths every year are directly caused by depression. Should we therefore shift the focus from lifesaving and life-prolonging treatment to ensuring comfort and maintaining dignity by exploring palliative options for extremely suicidally depressed patients with capacity who are adamant about ending their lives?

Euthanasia and assisted suicide for depression pose a profound paradox when viewed through a deontological lens. According to this view, the correct course of action is the one that is most “moral.” The moral stance would be to help those who are suffering. But what exactly constitutes “help”? Are euthanasia and assisted suicide helping or harming? Likewise, is keeping patients with capacity alive against their wishes helping or harming? Many believe that euthanasia, assisted suicide, and suicide itself are intrinsically and morally wrong. But this poses another clear impasse. Who should decide whether an action is moral or not? Should it be the individual? The treating physician? Or society?
 

Dr. Minna Chang graduated from Imperial College London with an MBBS (medicine and surgery) and a BSc (gastroenterology and hepatology) degree.

References

1. Office for National Statistics. Suicides in England and Wales. 2021.

2. Faulkner, A. Suicide and Deliberate Self Harm: The Fundamental Facts. Mental Health Foundation; 1997.

3. NHS. Suicide Factsheet. Southwest London and St. George’s Mental Health NHS Trust [ebook], 2022.

4. The King’s Fund. Financial debts and loans in the NHS. 2020.

5. NHS England. Mental Health Five Year Forward View Dashboard. 2018.

6. National Mental Health, Policy into Practice. The costs of mental ill health.

Publications
Topics
Sections

Is there a place for palliative care?

Is there a place for palliative care?

In 2020, there were 5,224 suicide deaths registered in England and Wales.1 The Mental Health Foundation, a London-based charitable organization, reports that approximately 70% of such deaths are in patients with depression.2 The number of attempted suicides is much higher – the South West London and St. George’s Mental Health Trust estimates that at least 140,000 people attempt suicide in England and Wales every year.3

In suicidal depression, the psychological pain is often unbearable and feels overwhelmingly incompatible with life. One is no longer living but merely surviving, and eventually the exhaustion will lead to decompensation. This is marked by suicide. The goal is to end the suffering permanently and this is achieved through death.

Dr. Minna Chang

Depression, like all other physical and mental illnesses, runs a course. This is highly variable between individuals and can be the case even between separate relapse episodes in the same patient. Like many diagnoses, depression is known to lead to death in a significant number of people. Many suicidally depressed patients feel that death will be an inevitable result of the illness.

Suicide is often viewed as a symptom of severe depression, but what if we considered death as part of the disease process itself? Consequently, would it be justifiable to consider depression in these patients as a form of terminal illness, since without treatment, the condition would lead to death? Accordingly, could there be a place for palliative care in a small minority of suicidally depressed patients? Taking such a perspective would mean that instead of placing the focus on the prevention of deaths and prolonging of lifespan, the focus would be on making the patients comfortable as the disease progresses, maintaining their dignity, and promoting autonomy.
 

Suicidal depression and rights

Patients with psychiatric conditions are generally not given the same rights to make decisions regarding their mental health and treatment, particularly if they wish to decline treatment. The rationale for this is that psychiatric patients do not have the capacity to make such decisions in the acute setting, because of the direct effects of the unwell mind on their decision-making processes and cognitive faculties. While this may be true in some cases, there is limited evidence that this applies to all suicidally depressed patients in all cases.

Another argument against allowing suicidally depressed patients to decline treatment is the notion that the episode of depression can be successfully treated, and the patients can return to their normal level of functioning. However, in individuals with a previous history of severe depression, it is possible that they will relapse again at some point. In the same way, a cancer can be treated, and patients could return to their baseline level of functioning, only for the cancer to then return later in life. In both cases, these relapses are emotionally and physically exhausting and painful to get through. The difference is that a cancer patient can decline further treatment and opt for no treatment or for palliative treatment, knowing that the disease will shorten life expectancy. For suicidal depression, this is not an option. Such patients may be sectioned, admitted, and treated against their will. Suicide, which could be considered a natural endpoint of the depressive illness, is unacceptable.

Is it fair to confiscate one’s right to decline treatment, solely because that person suffers from a mental illness, as opposed to a physical one? Numerous studies have demonstrated clear structural, neurological, and neurochemical changes in suicidal depression. This is evidence that such a condition encompasses a clear physical property. Other conditions, such as dementia and chronic pain, have previously been accepted for euthanasia in certain countries. Pain is a subjective experience of nociceptive and neurochemical signaling. In the same way, depression is a subjective experience involving aberrant neurochemical signaling. The difference is that physical pain can often be localized. However, patients with suicidal depression often experience very severe and tangible pain that can be difficult to articulate and for others to understand if they have never experienced it themselves.

Like distinct forms of physical pain, suicidal depression creates a different form of pain, but it is pain, nonetheless. Is it therefore fair for suicidally depressed patients to be given lesser rights than those suffering from physical illnesses in determining their fate?
 

 

 

Suicidal depression and capacity

A patient is assumed to have capacity unless proven otherwise. This is often the reverse when managing psychiatric patients. However, if patients are able to fulfill all criteria required for demonstrating capacity (understanding the information, retaining, weighing up, and communicating the decision), surely they have demonstrated capacity to make their decisions, whether that is to receive or to refuse treatment.

For physical illnesses, adults with capacity are permitted to make decisions that their treating teams may not agree with, but this disagreement alone is generally insufficient to override the decisions. These patients, unlike in suicidal depression, have the right to refuse lifesaving or life-prolonging treatment.

An argument for this is that in terminal physical illnesses, death is a passive process and neither the patient nor the physician are actively causing it. However, in many palliative settings, patients can be given medications and treatment for symptomatic relief, even if these may hasten their death. The principle that makes this permissible is that the primary aim is to improve the symptoms and ensure comfort. The unintended effect includes side effects and hastened death. Similarly, in suicidal depression, one could argue that the patient should be permitted medications that may hasten or lead to death, so long as the primary aim is to improve the symptoms of the unbearable mental pain and suffering.

Let us consider an alternative scenario. What if previously suicidal patients are currently in remission from depression and make advanced directives? In their current healthy state, they assert that if, in the future, they were to relapse, they would not want any form of treatment. Instead, they wish for the disease to run its course, which may end in death through suicide.

In this case, the circumstances in which the statement was made would be entirely valid – the patients at that moment have capacity, are not under coercion, are able to articulate logical thought processes, and their reasoning would not be affected by a concurrent psychiatric pathology. Furthermore, they can demonstrate that suicide is not an impulsive decision and have considered the consequences of suicide on themselves and others. If the patients can demonstrate all the above, what would the ethical grounds be for refusing this advanced directive?
 

Medical ethics

Below, I consider this debate in the context of four pillars of medical ethics.
 

Non-maleficence

To determine whether an action is in line with non-maleficence, one must ask whether the proposed treatment will improve or resolve one’s condition. In the case of severe suicidal depression, the treatment may help patients in the short term, but what happens if or when they relapse? The treatment will likely prolong life, but also inadvertently prolong suffering. What if the patients do not wish to go through this again? The treatment regime can be profoundly taxing for the patients, the loved ones, and sometimes even for the treating team. Are we doing more harm by forcing these patients to stay alive against their will?

Beneficence

Beneficence is the moral duty to promote the action that is in the patient’s best interest. But who should determine what the patient’s best interests are if the patient and the doctor disagree? Usually, this decision is made by the treating doctor, who considers the patient’s past and present wishes, beliefs and values, and capacity assessment. Supposing that the law was not a restriction, could one’s psychiatrist ever agree on psychiatric grounds alone that it is indeed in the patient’s best interests to die?

Doctors play a central role in the duty of care. But care does not always mean active treatment. Caring encompasses physical, psychological, and spiritual welfare and includes considering an individual patient’s dignity, personal circumstances, and wishes. In certain circumstances, keeping patients with capacity alive against their wishes could be more harmful than caring.
 

Autonomy

Autonomy gives the patients ultimate decision-making responsibility for their own lives. It allows patients with capacity to decline treatment that is recommended by their physicians and to make decisions regarding their own death. However, in suicidally depressed patients, this autonomy is confiscated. Severely unwell patients, at high risk of committing suicide, are not permitted the autonomy to make the decision regarding their treatment, suicide, and death.

Justice

A justice-orientated and utilitarian view questions whether spending resources on these patients wastes time, resources, and expertise, and whether resources should instead be spent on patients who do want treatment.

For example, the British National Health Service holds an outstanding debt of £13.4 billion.4 The financial cost of treating mental illness in 2020/2021 was £14.31 billion.5 The NHS estimates that wider costs to national economy, including welfare benefits, housing support, social workers, community support, lost productivity at work, etc., amounts to approximately £77 billion annually.6 Many severely depressed patients are so unwell that their ability to contribute to society, financially, socially, and otherwise, is minimal. If patients with capacity genuinely want to die and society would benefit from a reduction in the pressures on health and social care services, would it not be in both their best interests to allow them to die? This way, resources could be redirected to service users who would appreciate and benefit from them the most.

A consequentialist view focuses on whether the action will benefit the patient overall; the action itself is not so relevant. According to this view, keeping suicidally depressed patients alive against their wishes would be ethical if the patients lack capacity. Keeping them safe and treating them until they are better would overall be in the patients’ best interests. However, if the patients do have capacity and wish to die, forcing them to stay alive and undergo treatment against their wishes would merely prolong their suffering and thus could be considered unethical.
 

When enough is enough

In suicidal treatment-resistant depression, where the patient has tried multiple treatments over time and carefully considered alternatives, when is it time to stop trying? For physical illness, patients can refuse treatment provided they can demonstrate capacity. In depression, they can refuse treatment only if they can demonstrate that they are not at serious risk to themselves or others. Most societies consider suicide as a serious risk to self and therefore unacceptable. However, if we considered suicide as a natural endpoint of the disease process, should the patient have the right to refuse treatment and allow the disease to progress to death?

The treatment regime can be a lengthy process and the repeated failures to improve can be physically and mentally exhausting and further compound the hopelessness. Treatments often have side effects, which further erode the patient’s physical and mental wellbeing. Is there a time when giving up and withdrawing active treatment is in the patient’s best interests, especially if that is what the patient wants?

Terminal diseases are incurable and likely to hasten one’s death. Severe suicidal treatment-resistant depression conforms to both conditions – it is unresponsive to treatment and has a high likelihood of precipitating premature death through suicide. Most terminal illnesses can be managed with palliative treatment. In the context of severe suicidal depression, euthanasia and assisted suicide could be considered as means of palliative care.

Palliative care involves managing the patient’s symptomatology, dignity, and comfort. Euthanasia and assisted suicide help to address all of these. Like palliative care, euthanasia and assisted suicide aim to improve symptoms of depression by alleviating pain and suffering, even if they may hasten death.
 

 

 

Euthanasia and assisted suicide in severe depression

Euthanasia and assisted suicide are legal in seven countries. Two countries (Belgium and the Netherlands) permit euthanasia for psychiatric illnesses. Passive euthanasia is practiced in most countries, e.g., withholding artificial life support. In suicidal depression, it could be considered that this withholding of treatment may directly lead to death by suicide.

In active euthanasia and assisted suicide, the patient is given a chemical that will directly lead to death. Euthanasia and assisted suicide allow individuals to die with dignity in a controlled and organized manner. It ends the patients’ suffering and allows them to finally find peace. The difficulties that led them to seek euthanasia/assisted suicide indicate a loss of control of the pain and suffering in life, and euthanasia allows them to regain this control and autonomy through death. It allows these individuals to properly say goodbye to their loved ones, and a chance to share their thoughts and feelings.

In contrast, suicide is often covert, clandestine, and planned in secret, and it frequently requires individuals to be dishonest with their closest loved ones. The suicide often comes as a shock to the loved ones and profound grief, questions, anger, pain, sorrow, and guilt follow. These are due to questions that have been left unanswered, thoughts that were never shared, regret that they had not done more to help, and anguish knowing that their loved one died alone, in unbearable mental agony, unable to speak to anyone about this final hurdle.

Euthanasia and assisted suicide provide a path to overcome all these issues. They encourage open conversations between the patients, their loved ones, and the treating team. They promote transparency, mutual support, and help prepare the loved ones for the death. In this way, euthanasia and assisted suicide can benefit both the patient and the loved ones.

A significant proportion of severely suicidally depressed patients will eventually go on to commit or attempt suicide. Thus, giving them the autonomy to choose euthanasia or assisted suicide could be considered a kind, fair, and compassionate course of action, as it respects their wishes, and allows them to escape their suffering and to die with dignity.
 

Conclusion

Depression has historically never been considered a terminal illness, but there is undeniable evidence that a significant number of deaths every year are directly caused by depression. Should we therefore shift the focus from lifesaving and life-prolonging treatment to ensuring comfort and maintaining dignity by exploring palliative options for extremely suicidally depressed patients with capacity, who are adamant on ending their lives?

Euthanasia and assisted suicide for depression pose a profound paradox when viewed through a deontological lens. According to this, the correct course of action directly corresponds to what the most “moral” action would be. The moral stance would be to help those who are suffering. But what exactly constitutes “help”? Are euthanasia and assisted suicide helping or harming? Likewise, is keeping patients with capacity alive against their wishes helping or harming? Many believe that euthanasia, assisted suicide, and suicide itself are intrinsically and morally wrong. But this poses another clear impasse. Who should be the ones to decide whether an action is moral or not? Should it be the individual? The treating physician? Or society?
 

Dr. Chang graduated from Imperial College London with an MBBS (medicine and surgery) and a BSc (gastroenterology and hepatology) degree.

References

1. Office for National Statistics. Suicides in England and Wales – Office for National Statistics, 2021.

2. Faulkner, A. Suicide and Deliberate Self Harm: The Fundamental Facts. Mental Health Foundation; 1997.

3. NHS. Suicide Factsheet. Southwest London and St. George’s Mental Health NHS Trust [ebook], 2022.

4. The King’s Fund. Financial debts and loans in the NHS. 2020.

5. NHS England. Mental Health Five Year Forward View Dashboard. 2018.

6. National Mental Health, Policy into Practice. The costs of mental ill health.

In 2020, there were 5,224 suicide deaths registered in England and Wales.1 The Mental Health Foundation, a London-based charitable organization, reports that approximately 70% of such deaths are in patients with depression.2 The number of attempted suicides is much higher – the South West London and St. George’s Mental Health Trust estimates that at least 140,000 people attempt suicide in England and Wales every year.3

In suicidal depression, the psychological pain is often unbearable and feels overwhelmingly incompatible with life. One is no longer living but merely surviving, and eventually the exhaustion will lead to decompensation. This is marked by suicide. The goal is to end the suffering permanently and this is achieved through death.

Dr. Minna Chang

Depression, like all other physical and mental illnesses, runs a course. This is highly variable between individuals and can be the case even between separate relapse episodes in the same patient. Like many diagnoses, depression is known to lead to death in a significant number of people. Many suicidally depressed patients feel that death will be an inevitable result of the illness.

Suicide is often viewed as a symptom of severe depression, but what if we considered death as part of the disease process itself? Consequently, would it be justifiable to consider depression in these patients as a form of terminal illness, since without treatment, the condition would lead to death? Accordingly, could there be a place for palliative care in a small minority of suicidally depressed patients? Taking such a perspective would mean that instead of placing the focus on the prevention of deaths and prolonging of lifespan, the focus would be on making the patients comfortable as the disease progresses, maintaining their dignity, and promoting autonomy.
 

Suicidal depression and rights

Patients with psychiatric conditions are generally not given the same rights to make decisions regarding their mental health and treatment, particularly if they wish to decline treatment. The rationale for this is that psychiatric patients do not have the capacity to make such decisions in the acute setting, because of the direct effects of the unwell mind on their decision-making processes and cognitive faculties. While this may be true in some cases, there is limited evidence that this applies to all suicidally depressed patients in all cases.

Another argument against allowing suicidally depressed patients to decline treatment is the notion that the episode of depression can be successfully treated, and the patients can return to their normal level of functioning. However, in individuals with a previous history of severe depression, it is possible that they will relapse again at some point. In the same way, a cancer can be treated, and patients could return to their baseline level of functioning, only for the cancer to then return later in life. In both cases, these relapses are emotionally and physically exhausting and painful to get through. The difference is that a cancer patient can decline further treatment and opt for no treatment or for palliative treatment, knowing that the disease will shorten life expectancy. For suicidal depression, this is not an option. Such patients may be sectioned, admitted, and treated against their will. Suicide, which could be considered a natural endpoint of the depressive illness, is unacceptable.

Is it fair to confiscate one’s right to decline treatment, solely because that person suffers from a mental illness, as opposed to a physical one? Numerous studies have demonstrated clear structural, neurological, and neurochemical changes in suicidal depression. This is evidence that such a condition encompasses a clear physical property. Other conditions, such as dementia and chronic pain, have previously been accepted for euthanasia in certain countries. Pain is a subjective experience of nociceptive and neurochemical signaling. In the same way, depression is a subjective experience involving aberrant neurochemical signaling. The difference is that physical pain can often be localized. However, patients with suicidal depression often experience very severe and tangible pain that can be difficult to articulate and for others to understand if they have never experienced it themselves.

Like distinct forms of physical pain, suicidal depression creates a different form of pain, but it is pain, nonetheless. Is it therefore fair for suicidally depressed patients to be given lesser rights than those suffering from physical illnesses in determining their fate?
 

 

 

Suicidal depression and capacity

A patient is assumed to have capacity unless proven otherwise. This is often the reverse when managing psychiatric patients. However, if patients are able to fulfill all criteria required for demonstrating capacity (understanding the information, retaining, weighing up, and communicating the decision), surely they have demonstrated capacity to make their decisions, whether that is to receive or to refuse treatment.

For physical illnesses, adults with capacity are permitted to make decisions that their treating teams may not agree with, but this disagreement alone is generally insufficient to override the decisions. These patients, unlike in suicidal depression, have the right to refuse lifesaving or life-prolonging treatment.

An argument for this is that in terminal physical illnesses, death is a passive process and neither the patient nor the physician are actively causing it. However, in many palliative settings, patients can be given medications and treatment for symptomatic relief, even if these may hasten their death. The principle that makes this permissible is that the primary aim is to improve the symptoms and ensure comfort. The unintended effect includes side effects and hastened death. Similarly, in suicidal depression, one could argue that the patient should be permitted medications that may hasten or lead to death, so long as the primary aim is to improve the symptoms of the unbearable mental pain and suffering.

Let us consider an alternative scenario. What if previously suicidal patients are currently in remission from depression and make advanced directives? In their current healthy state, they assert that if, in the future, they were to relapse, they would not want any form of treatment. Instead, they wish for the disease to run its course, which may end in death through suicide.

In this case, the circumstances in which the statement was made would be entirely valid – the patients at that moment have capacity, are not under coercion, are able to articulate logical thought processes, and their reasoning would not be affected by a concurrent psychiatric pathology. Furthermore, they can demonstrate that suicide is not an impulsive decision and have considered the consequences of suicide on themselves and others. If the patients can demonstrate all the above, what would the ethical grounds be for refusing this advanced directive?
 

Medical ethics

Below, I consider this debate in the context of four pillars of medical ethics.
 

Non-maleficence

To determine whether an action is in line with non-maleficence, one must ask whether the proposed treatment will improve or resolve one’s condition. In the case of severe suicidal depression, the treatment may help patients in the short term, but what happens if or when they relapse? The treatment will likely prolong life, but also inadvertently prolong suffering. What if the patients do not wish to go through this again? The treatment regime can be profoundly taxing for the patients, the loved ones, and sometimes even for the treating team. Are we doing more harm by forcing these patients to stay alive against their will?

Beneficence

Beneficence is the moral duty to promote the action that is in the patient’s best interest. But who should determine what the patient’s best interests are if the patient and the doctor disagree? Usually, this decision is made by the treating doctor, who considers the patient’s past and present wishes, beliefs and values, and capacity assessment. Supposing that the law was not a restriction, could one’s psychiatrist ever agree on psychiatric grounds alone that it is indeed in the patient’s best interests to die?

Doctors play a central role in the duty of care. But care does not always mean active treatment. Caring encompasses physical, psychological, and spiritual welfare and includes considering an individual patient’s dignity, personal circumstances, and wishes. In certain circumstances, keeping patients with capacity alive against their wishes could be more harmful than caring.
 

Autonomy

Autonomy gives the patients ultimate decision-making responsibility for their own lives. It allows patients with capacity to decline treatment that is recommended by their physicians and to make decisions regarding their own death. However, in suicidally depressed patients, this autonomy is confiscated. Severely unwell patients, at high risk of committing suicide, are not permitted the autonomy to make the decision regarding their treatment, suicide, and death.

Justice

A justice-orientated and utilitarian view questions whether spending resources on these patients wastes time, resources, and expertise, and whether resources should instead be spent on patients who do want treatment.

For example, the British National Health Service holds an outstanding debt of £13.4 billion.4 The financial cost of treating mental illness in 2020/2021 was £14.31 billion.5 The NHS estimates that wider costs to national economy, including welfare benefits, housing support, social workers, community support, lost productivity at work, etc., amounts to approximately £77 billion annually.6 Many severely depressed patients are so unwell that their ability to contribute to society, financially, socially, and otherwise, is minimal. If patients with capacity genuinely want to die and society would benefit from a reduction in the pressures on health and social care services, would it not be in both their best interests to allow them to die? This way, resources could be redirected to service users who would appreciate and benefit from them the most.

A consequentialist view focuses on whether the action will benefit the patient overall; the action itself is not so relevant. According to this view, keeping suicidally depressed patients alive against their wishes would be ethical if the patients lack capacity. Keeping them safe and treating them until they are better would overall be in the patients’ best interests. However, if the patients do have capacity and wish to die, forcing them to stay alive and undergo treatment against their wishes would merely prolong their suffering and thus could be considered unethical.
 

When enough is enough

In suicidal treatment-resistant depression, where the patient has tried multiple treatments over time and carefully considered alternatives, when is it time to stop trying? For physical illness, patients can refuse treatment provided they can demonstrate capacity. In depression, they can refuse treatment only if they can demonstrate that they are not at serious risk to themselves or others. Most societies consider suicide as a serious risk to self and therefore unacceptable. However, if we considered suicide as a natural endpoint of the disease process, should the patient have the right to refuse treatment and allow the disease to progress to death?

The treatment regimen can be lengthy, and repeated failures to improve are physically and mentally exhausting and further compound the hopelessness. Treatments often have side effects that further erode the patient’s physical and mental wellbeing. Is there a point at which withdrawing active treatment is in the patient’s best interests, especially if that is what the patient wants?

Terminal diseases are incurable and likely to hasten one’s death. Severe suicidal treatment-resistant depression meets both criteria: it is unresponsive to treatment and carries a high likelihood of precipitating premature death through suicide. Most terminal illnesses can be managed with palliative treatment. In the context of severe suicidal depression, euthanasia and assisted suicide could be considered a form of palliative care.

Palliative care involves managing the patient’s symptoms, dignity, and comfort, and euthanasia and assisted suicide can be seen as addressing all of these. Like palliative care, euthanasia and assisted suicide aim to relieve the pain and suffering of depression, even though they hasten death.
 

 

 

Euthanasia and assisted suicide in severe depression

Euthanasia and assisted suicide are legal in seven countries; two of them, Belgium and the Netherlands, permit euthanasia for psychiatric illness. Passive euthanasia – withholding artificial life support, for example – is practiced in most countries. In suicidal depression, withholding treatment could likewise be seen as leading directly to death by suicide.

In active euthanasia and assisted suicide, the patient receives a drug that directly causes death. Euthanasia and assisted suicide allow individuals to die with dignity in a controlled and organized manner. They end the patient’s suffering and allow the patient finally to find peace. The difficulties that led the person to seek euthanasia or assisted suicide reflect a loss of control over pain and suffering in life, and euthanasia allows them to regain that control and autonomy through death. It also gives these individuals the opportunity to say goodbye properly to their loved ones and to share their thoughts and feelings.

In contrast, suicide is often covert, clandestine, and planned in secret, and it frequently requires individuals to be dishonest with their closest loved ones. The suicide often comes as a shock, and profound grief, questions, anger, pain, sorrow, and guilt follow – questions left unanswered, thoughts never shared, regret at not having done more to help, and anguish at knowing that their loved one died alone, in unbearable mental agony, unable to speak to anyone about this final hurdle.

Euthanasia and assisted suicide provide a path around these problems. They encourage open conversations among the patient, their loved ones, and the treating team; they promote transparency and mutual support; and they help prepare loved ones for the death. In this way, euthanasia and assisted suicide can benefit both the patient and the loved ones.

A significant proportion of severely suicidally depressed patients will eventually attempt or complete suicide. Giving them the autonomy to choose euthanasia or assisted suicide could therefore be considered a kind, fair, and compassionate course of action: it respects their wishes and allows them to escape their suffering and die with dignity.
 

Conclusion

Depression has historically not been considered a terminal illness, yet there is undeniable evidence that a significant number of deaths every year are directly caused by depression. Should we therefore shift the focus from lifesaving and life-prolonging treatment to ensuring comfort and maintaining dignity by exploring palliative options for extremely suicidally depressed patients with capacity who are adamant about ending their lives?

Euthanasia and assisted suicide for depression pose a profound paradox when viewed through a deontological lens, according to which the correct course of action is the most “moral” one. The moral stance is to help those who are suffering. But what exactly constitutes “help”? Are euthanasia and assisted suicide helping or harming? Likewise, is keeping patients with capacity alive against their wishes helping or harming? Many believe that euthanasia, assisted suicide, and suicide itself are intrinsically and morally wrong, which poses another impasse: who should decide whether an action is moral? The individual? The treating physician? Or society?
 

Dr. Chang graduated from Imperial College London with an MBBS (medicine and surgery) and a BSc (gastroenterology and hepatology) degree.

References

1. Office for National Statistics. Suicides in England and Wales – Office for National Statistics, 2021.

2. Faulkner, A. Suicide and Deliberate Self Harm: The Fundamental Facts. Mental Health Foundation; 1997.

3. NHS. Suicide Factsheet. Southwest London and St. George’s Mental Health NHS Trust [ebook], 2022.

4. The King’s Fund. Financial debts and loans in the NHS. 2020.

5. NHS England. Mental Health Five Year Forward View Dashboard. 2018.

6. National Mental Health, Policy into Practice. The costs of mental ill health.


Two states aim to curb diet pill sales to minors

Article Type
Changed
Fri, 09/16/2022 - 08:17

California and New York are on the cusp of going further than the Food and Drug Administration in restricting the sale of nonprescription diet pills to minors as pediatricians and public health advocates try to protect kids from extreme weight-loss gimmicks online.

A bill before Gov. Gavin Newsom would bar anyone under 18 in California from buying over-the-counter weight-loss supplements – whether online or in shops – without a prescription. A similar bill passed by New York lawmakers is on Gov. Kathy Hochul’s desk. Neither Democrat has indicated how he or she will act.


If both bills are signed into law, proponents hope the momentum will build to restrict diet pill sales to children in more states. Massachusetts, New Jersey, and Missouri have introduced similar bills and backers plan to continue their push next year.

Nearly 30 million people in the United States will have an eating disorder in their lifetime; 95% of them are aged between 12 and 25, according to Johns Hopkins All Children’s Hospital. The hospital added that eating disorders pose the highest risk of mortality of any mental health disorder. And it has become easier than ever for minors to get pills that are sold online or on drugstore shelves. All dietary supplements, which include those for weight loss, accounted for nearly 35% of the $63 billion over-the-counter health products industry in 2021, according to Vision Research Reports, a market research firm.

Dietary supplements, which encompass a broad range of vitamins, herbs, and minerals, are classified by the FDA as food and don’t undergo scientific and safety testing as prescription drugs and over-the-counter medicines do.

Public health advocates want to keep weight-loss products – with ads that may promise to “Drop 5 pounds a week!” and pill names like Slim Sense – away from young people, particularly girls, since some research has linked some products to eating disorders. A study in the American Journal of Public Health, which followed more than 10,000 women aged 14-36 over 15 years, found that “those who used diet pills had more than 5 times higher adjusted odds of receiving an eating disorder diagnosis from a health care provider within 1-3 years than those who did not.”

Many pills have been found tainted with banned and dangerous ingredients that may cause cancer, heart attacks, strokes, and other ailments. For example, the FDA advised the public to avoid Slim Sense by Dr. Reade because it contains lorcaserin, which has been found to cause psychiatric disturbances and impairments in attention or memory. The FDA ordered it discontinued and the company couldn’t be reached for comment.

“Unscrupulous manufacturers are willing to take risks with consumers’ health – and they are lacing their products with illegal pharmaceuticals, banned pharmaceuticals, steroids, excessive stimulants, even experimental stimulants,” said S. Bryn Austin, ScD, founding director of the Strategic Training Initiative for the Prevention of Eating Disorders, or STRIPED, which supports the restrictions. “Consumers have no idea that this is what’s in these types of products.”

STRIPED is a public health initiative based at the Harvard School of Public Health, Boston, and Boston Children’s Hospital.

An industry trade group, the Natural Products Association, disputes that diet pills cause eating disorders, citing the lack of consumer complaints to the FDA of adverse events from their members’ products. “According to FDA data, there is no association between the two,” said Kyle Turk, the association’s director of government affairs.

The association contends that its members adhere to safe manufacturing processes, random product testing, and appropriate marketing guidelines. Representatives also worry that if minors can’t buy supplements over the counter, they may buy them from “crooks” on the black market and undermine the integrity of the industry. Under the bills, minors purchasing weight-loss products must show identification along with a prescription.

Not all business groups oppose the ban. The American Herbal Products Association, a trade group representing dietary supplement manufacturers and retailers, dropped its opposition to California’s bill once it was amended to remove ingredient categories that are found in non-diet supplements and vitamins, according to Robert Marriott, director of regulatory affairs.

Children’s advocates have found worrisome trends among young people who envision their ideal body type based on what they see on social media. According to a study commissioned by Fairplay, a nonprofit that seeks to stop harmful marketing practices targeting children, kids as young as 9 were found to be following three or more eating disorder accounts on Instagram, while the median age was 19. The authors called it a “pro–eating disorder bubble.”

Meta, which owns Instagram and Facebook, said the report lacks nuance, such as recognizing the human need to share life’s difficult moments. The company argues that blanket censorship isn’t the answer. “Experts and safety organizations have told us it’s important to strike a balance and allow people to share their personal stories while removing any content that encourages or promotes eating disorders,” Liza Crenshaw, a Meta spokesperson, said in an email.

Jason Nagata, MD, a pediatrician at UCSF Benioff Children’s Hospital in San Francisco who cares for children and young adults with life-threatening eating disorders, believes that easy access to diet pills contributes to his patients’ conditions. That was the case for one of his patients, an emaciated 11-year-old girl.

“She had basically entered a starvation state because she was not getting enough nutrition,” said Dr. Nagata, who provided supporting testimony for the California bill. “She was taking these pills and using other kinds of extreme behaviors to lose weight.”

Dr. Nagata said the number of patients he sees with eating disorders has tripled since the pandemic began. They are desperate to get diet pills, some with modest results. “We’ve had patients who have been so dependent on these products that they will be hospitalized and they’re still ordering these products on Amazon,” he said.

Public health advocates turned to state legislatures in response to the federal government’s limited authority to regulate diet pills. Under a 1994 federal law known as the Dietary Supplement Health and Education Act, the FDA “cannot step in until after there is a clear issue of harm to consumers,” said Dr. Austin.

No match for the supplement industry’s heavy lobbying on Capitol Hill, public health advocates shifted to a state-by-state approach.

There is, however, a push for the FDA to improve oversight of what goes into diet pills. Sen. Dick Durbin (D-Ill.) in April introduced a bill that would require dietary supplement manufacturers to register their products – along with the ingredients – with the regulator.

Proponents say the change is needed because manufacturers have been known to include dangerous ingredients. In a review of a health fraud database, C. Michael White, PharmD, of the University of Connecticut, Storrs, found that 35% of tainted health products were weight-loss supplements.

A few ingredients have been banned, including sibutramine, a stimulant. “It was a very commonly used weight-loss supplement that ended up being removed from the U.S. market because of its elevated risk of causing things like heart attacks, strokes, and arrhythmias,” Dr. White said.

Another ingredient was phenolphthalein, which was used in laxatives until it was identified as a suspected carcinogen and banned in 1999. “To think,” he said, “that that product would still be on the U.S. market is just unconscionable.”

This story was produced by KHN, which publishes California Healthline, an editorially independent service of the California Health Care Foundation. KHN (Kaiser Health News) is a national newsroom that produces in-depth journalism about health issues. Together with Policy Analysis and Polling, KHN is one of the three major operating programs at KFF (Kaiser Family Foundation). KFF is an endowed nonprofit organization providing information on health issues to the nation.


Mental health in America: ‘The kids are not alright’

Article Type
Changed
Fri, 09/16/2022 - 09:18

A new report shines a light on the toll the pandemic and other stressors have taken on the mental health of U.S. children and adolescents over the last 6 years.

The report shows a dramatic increase in use of acute care services for depression, anxiety, and other mental health conditions, especially among teens and preteens.

The report – The Kids Are Not Alright: Pediatric Mental Health Care Utilization from 2016-2021 – is the work of researchers at the Clarify Health Institute, the research arm of Clarify Health.

The results are “deeply concerning” and should “spark a conversation” around the need to improve access, utilization, and quality of pediatric behavioral health services, Niall Brennan, chief analytics and privacy officer for the Clarify Health Institute, told this news organization.
 

‘Startling’ trends

Leveraging an observational, national sample of insurance claims from more than 20 million children aged 1-19 years annually, the researchers observed several disturbing trends in mental health care.

From 2016 to 2021, inpatient (IP) admissions rose 61% (from 30 to 48 visits annually per 1,000) and emergency department visits rose 20% (from 55 to 66 visits annually per 1,000).

The increase in mental health IP admissions ranged from a low of 27% in the West North Central region to a high of 137% in the Middle Atlantic region.

There were substantial increases from 2016 to 2021 in mental health IP admissions among children of all age groups, but particularly among adolescents aged 12-15 years, in whom admissions rose 84% among girls and 83% among boys.

There was also a sharp increase in mental health ED visits among girls and boys aged 12-15 years, increasing 20% overall during the study period. 

Mental health IP use grew faster from 2016 to 2021 among children with commercial insurance than among those with Medicaid (103% vs. 40%).

In contrast, mental health–specific ED visits declined 10% among children with commercial insurance and increased by 20% among those with Medicaid.

ED utilization rates in 2021 were nearly twice as high in the Medicaid population, compared with those for children with commercial insurance.

These are “startling” increases, Mr. Brennan said in an interview.

These trends “reinforce health care leaders’ responsibility to address children’s mental health, especially when considering that half of all mental health conditions onset during adolescence and carry into adulthood,” Jean Drouin, MD, Clarify Health’s chief executive officer and cofounder, adds in a news release.

“With a growing consensus that mental, behavioral, and physical health intersect, this research report aims to spark a conversation about the overall wellbeing of America’s next generation,” Dr. Drouin says.
 

Concern for the future

Commenting on the new report, Anish Dube, MD, chair of the American Psychiatric Association’s Council on Children, Adolescents, and their Families, said the findings are “concerning, though unsurprising.”

“They confirm what those of us in clinical practice have experienced in the last several years. The need for mental health services continues to rise every year, while access to adequate help remains lacking,” Dr. Dube said.

“With the recent COVID-19 pandemic, concerns about the effects of climate change, global political uncertainty, and a rapidly changing employment landscape, young people in particular are vulnerable to worries about their future and feelings of helplessness and hopelessness,” he added.

Dr. Dube said there is no one right solution, and addressing this problem must consider individual and local factors.

However, broader interventions are needed: increasing access to care by enforcing mental health parity, and increasing the number of trained and qualified mental health professionals, such as child and adolescent psychiatrists, who can assess and treat these conditions in young people before they escalate into crises requiring acute interventions such as inpatient hospitalization.

“Public health interventions aimed at schools and families in raising awareness of mental health and well-being, and simple, cost-effective interventions to practice mental wellness will also help reduce the burden of mental illness in young people,” Dr. Dube added.

“The APA continues to fight for mental health parity enforcement and for meaningful access to mental health care for children, adolescents, and their families,” Dr. Dube said.

This research was conducted by the Clarify Health Institute. Mr. Brennan and Dr. Dube report no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Vitamins or cocoa: Which preserves cognition?

Article Type
Changed
Thu, 12/15/2022 - 15:36

 

Unexpected results from a phase 3 trial exploring the effect of multivitamins on cognition have now been published.

Findings from the phase 3 study show that daily multivitamin use, but not cocoa extract, is linked to a significantly slower rate of age-related cognitive decline.

Originally presented last November at the 14th Clinical Trials on Alzheimer’s Disease (CTAD) conference, this is the first large-scale, long-term randomized controlled trial to examine the effects of cocoa extract and multivitamins on global cognition. The trial’s primary focus was on cocoa extract, which earlier studies suggest may preserve cognitive function. Analyzing the effect of multivitamins was a secondary outcome.

That the vitamins, but not the cocoa, proved beneficial is the exact opposite of what researchers expected. Still, the results offer an interesting new direction for future study, lead investigator Laura D. Baker, PhD, professor of gerontology and geriatric medicine at Wake Forest University, Winston-Salem, N.C., said in an interview.

“This study made us take notice of a pathway for possible cognitive protection,” Dr. Baker said. “Without this study, we would never have looked down that road.”

The full results were published online in Alzheimer’s and Dementia.
 

Unexpected effect

The COSMOS-Mind study is a substudy of a larger parent trial, COSMOS, which investigated the effects of cocoa extract and a standard multivitamin-mineral supplement on cardiovascular and cancer outcomes in more than 21,000 older participants.

In COSMOS-Mind, researchers tested whether daily intake of cocoa extract vs. placebo and a multivitamin-mineral vs. placebo improved cognition in older adults.

More than 2,200 participants aged 65 and older were enrolled and followed for 3 years. They completed tests over the telephone at baseline and annually to evaluate memory and other cognitive abilities.

Results showed cocoa extract had no effect on global cognition compared with placebo (mean z score, 0.03; P = .28). Daily multivitamin use, however, did show significant benefits on global cognition vs. placebo (mean z score, 0.07; P = .007).

The beneficial effect was most pronounced in participants with a history of cardiovascular disease (0.14 with history vs. 0.06 without; P = .01).

Researchers found similar protective effects for memory and executive function. 

Dr. Baker suggested one possible explanation for the positive effects of multivitamins may be the boost in micronutrients and essential minerals they provided.

“With nutrient-deficient diets plus a high prevalence of cardiovascular disease, diabetes, and other medical comorbidities that we know impact the bioavailability of these nutrients, we are possibly dealing with older adults who are at below optimum in terms of their essential micronutrients and minerals,” she said.

“Even suboptimum levels of micronutrients and essential minerals can have significant consequences for brain health,” she added.
 

More research needed

Intriguing as the results may be, more work is needed before the findings could affect nutritional guidance, according to Maria C. Carrillo, PhD, chief science officer for the Alzheimer’s Association.

“While the Alzheimer’s Association is encouraged by these results, we are not ready to recommend widespread use of a multivitamin supplement to reduce risk of cognitive decline in older adults,” Dr. Carrillo said in a statement.

“For now, and until there is more data, people should talk with their health care providers about the benefits and risks of all dietary supplements, including multivitamins,” she added.

Dr. Baker agreed, noting that the study was not designed to measure multivitamin use as a primary outcome. In addition, nearly 90% of the participants were non-Hispanic White, which is not representative of the overall population demographics.

The investigators are now designing another, larger trial that would include a more diverse participant pool. It will be aimed specifically at learning more about how and why multivitamins seem to offer a protective effect on cognition, Dr. Baker noted.

The study was funded by the National Institute on Aging of the National Institutes of Health. Dr. Baker and Dr. Carrillo report no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews - 30(11)
Publications
Topics
Sections

 

Unexpected results from a phase 3 trial exploring the effect of multivitamins and cognition have now been published.

Findings from a phase 3 study show daily multivitamin use, but not cocoa, is linked to a significantly slower rate of age-related cognitive decline.

Originally presented last November at the 14th Clinical Trials on Alzheimer’s Disease (CTAD) conference, this is the first large-scale, long-term randomized controlled trial to examine the effects of cocoa extract and multivitamins on global cognition. The trial’s primary focus was on cocoa extract, which earlier studies suggest may preserve cognitive function. Analyzing the effect of multivitamins was a secondary outcome.

Showing vitamins, but not cocoa, were beneficial is the exact opposite of what researchers expected. Still, the results offer an interesting new direction for future study, lead investigator Laura D. Baker, PhD, professor of gerontology and geriatric medicine at Wake Forest University, Winston-Salem, N.C., said in an interview.

“This study made us take notice of a pathway for possible cognitive protection,” Dr. Baker said. “Without this study, we would never have looked down that road.”

The full results were published online in Alzheimer’s and Dementia.
 

Unexpected effect

The COSMOS-Mind study is a substudy to a larger parent trial called COSMOS. It investigated the effects of cocoa extract and a standard multivitamin-mineral on cardiovascular and cancer outcomes in more than 21,000 older participants.

In COSMOS-Mind, researchers tested whether daily intake of cocoa extract vs. placebo and a multivitamin-mineral vs. placebo improved cognition in older adults.

More than 2,200 participants aged 65 and older were enrolled and followed for 3 years. They completed tests over the telephone at baseline and annually to evaluate memory and other cognitive abilities.

Results showed cocoa extract had no effect on global cognition compared with placebo (mean z-score, 0.03; P = .28). Daily multivitamin use, however, did show significant benefits on global cognition vs. placebo (mean z, 0.07, P = .007).

The beneficial effect was most pronounced in participants with a history of cardiovascular disease (no history 0.06 vs. history 0.14; P = .01).

Researchers found similar protective effects for memory and executive function. 

Dr. Baker suggested one possible explanation for the positive effects of multivitamins may be the boost in micronutrients and essential minerals they provided.

“With nutrient-deficient diets plus a high prevalence of cardiovascular disease, diabetes, and other medical comorbidities that we know impact the bioavailability of these nutrients, we are possibly dealing with older adults who are at below optimum in terms of their essential micronutrients and minerals,” she said.

“Even suboptimum levels of micronutrients and essential minerals can have significant consequences for brain health,” she added.
 

More research needed

Intriguing as the results may be, more work is needed before the findings could affect nutritional guidance, according to Maria C. Carrillo, PhD, chief science officer for the Alzheimer’s Association.

“While the Alzheimer’s Association is encouraged by these results, we are not ready to recommend widespread use of a multivitamin supplement to reduce risk of cognitive decline in older adults,” Dr. Carrillo said in a statement.

“For now, and until there is more data, people should talk with their health care providers about the benefits and risks of all dietary supplements, including multivitamins,” she added.

Dr. Baker agreed, noting that the study was not designed to measure multivitamin use as a primary outcome. In addition, nearly 90% of the participants were non-Hispanic White, which is not representative of the overall population demographics.

The investigators are now designing another, larger trial that would include a more diverse participant pool. It will be aimed specifically at learning more about how and why multivitamins seem to offer a protective effect on cognition, Dr. Baker noted.

The study was funded by the National Institute on Aging of the National Institutes of Health. Dr. Baker and Dr. Carrillo report no relevant financial relationships.

A version of this article first appeared on Medscape.com.

 


FROM ALZHEIMER’S AND DEMENTIA

Schizophrenia and postmodernism: A philosophical exercise in treatment

Article Type
Changed
Thu, 09/15/2022 - 14:02

Schizophrenia is defined by episodes of psychosis: periods during which one suffers from delusions, hallucinations, disorganized behavior, disorganized speech, and negative symptoms. The concept of schizophrenia can be simplified as a detachment from reality. Patients who struggle with this illness frame their perceptions with a different set of rules and beliefs than the rest of society. These altered perceptions frequently become the basis of delusions, one of the most recognized symptoms of schizophrenia.

A patient with schizophrenia does not so much have delusions as hold a belief system that is not recognized by anyone else. It is not the mismatch between “objective reality” and the held belief that qualifies the belief as delusional, so much as the mismatch with the beliefs of those around you. Heliocentrism denial – denying that the earth revolves around the sun – is incorrect because it is not factual. But it is a delusion not because it is incorrect; it is a delusion because society has chosen it to be incorrect.

Dr. Nicolas Badre

We’d like to invite the reader to a thought experiment. “Objective reality” can be defined as “anything that exists as it is independent of any conscious awareness of it.”1 “Conscious awareness” entails an observer. If we remove the concept of consciousness, or of an observer, from existence, how would we then define “objective reality,” when the very definition of “objective reality” points to the existence of an observer? One deduces that there is no way to define “objective reality” without invoking the notion of an observer or of consciousness.

It is our contention that the concept of an “objective reality” is tautological – its definition presupposes the very observer it claims to exclude. This philosophical quandary helps explain why a person with schizophrenia may feel alienated by others who do not appreciate their perceived “objective reality.”
 

Schizophrenia and ‘objective reality’

A patient with schizophrenia enters a psychiatrist’s office and may realize that their belief is not shared by others or by society at large. The schizophrenic patient may understand the concept of delusions as fixed, false beliefs. To them, however, it is everyone else who is delusional. They may attempt to convince you, as their provider, to switch to their side. They may provide you with evidence for their belief system. One could argue that believing them, in response, would be curative. If not only one’s psychiatrist but society as a whole accepted the schizophrenic patient’s belief system, it would no longer be delusional, whether real or not. Objective reality requires the presence of an observer to grant its value of truth.

Dr. Vladimir Khalafian

In a simplistic way, those were the arguments of postmodernist philosophers. Reality is tainted by its observer, much as the Heisenberg uncertainty principle teaches that there is a limit to how precisely we can simultaneously know a particle’s position and momentum. This perspective may explain why Michel Foucault, PhD, the famous French postmodernist philosopher, was so interested in psychiatry and in particular in schizophrenia. Dr. Foucault was deeply concerned with society imposing its beliefs and value system on patients and positioning itself as the ultimate arbiter of reality. He went on to postulate that the bigger difference between schizophrenic patients and psychiatrists was not who occupied the correct plane of reality but who was granted by society the authority to arbitrate the answer. If reality is a subjective construct enforced by a ruling class, who holds the power to rule becomes of the utmost importance.

Intersubjectivity theory in psychoanalysis has many of its sensibilities rooted in such thought. It argues against the myth of the isolated mind. Truth, in the context of psychoanalysis, is seen as an emergent product of dialogue within the therapist/patient dyad. This is in line with the ontological shift from a logical-positivist model to the more modern, constructivist framework. In terms of its view of psychosis, “delusional ideas were understood as a form of absolution – a radical decontextualization serving vital and restorative defensive functions.”2

It is an interesting proposition to advance this theory further by contending that it is not the independent consciousness of two entities that creates the intersubjective space, but rather the intersubjective space that literally creates the conscious entities. Could it not be said that the subjective relationship is more fundamental than consciousness itself? As Chris Jaenicke, Dipl.-Psych., wrote, “infant research has opened our eyes to the fact that there is no unilateral action.”3

 

 

Postmodernism and psychiatry

Postmodernism and its precursor, skepticism, have long histories within the field of philosophy. This article will not summarize centuries of philosophical thought. In brief, skepticism is a powerful philosophical tool for pointing out the limitations of human knowledge and certainty.

As a pedagogic jest to trainees, we will often point out that none of us “really knows” our date of birth with absolute certainty. None of us were conscious enough to remember our birth, conscious enough to understand the concept of date or time, or conscious enough to know who participated in it. At a fundamental level, we choose to believe our date of birth. Similarly, while the world could be a fictionalized simulation,4 we choose to believe that it is real because it behaves in a consistent way that permits scientific study. Postmodernism and skepticism are philosophical tools that permit one to question everything but are themselves limited by the real and empiric lives we live.

Psychiatrists are empiricists. We treat real people, who suffer in a very perceptible way and live in a very tangible world. We frown on the postmodernist perspective and spend little, if any, time studying it as trainees. However, postmodernism, despite its philosophical and practical flaws and its adjacency to antipsychiatry,5 is an essential tool for the psychiatrist. In addition to providing the standard treatments for schizophrenia, the psychiatrist should attempt to create a bond with someone who is disconnected from the world. Postmodernism provides us with a way of doing so.

A psychiatrist who understands and appreciates postmodernism can show a patient why, at some level, we cannot refute all delusions. Such a psychiatrist can then empathize with the fact that some of the patient’s core beliefs may always remain unanswered. The psychiatrist can appreciate that, to some degree, the patient’s beliefs are not true because society has chosen for them not to be true. Additionally, the psychiatrist can acknowledge to the patient that in some ways the correctness of a delusion is less relevant than the power of society to enforce its reality on the patient. Postmodernism gives psychiatrists a framework for authentically connecting with a psychotic human being. This connection is itself partially curative, as it restores the patient’s attachment to society; we now share some plane of reality – the relationship – that is the same for both of us.
 

Psychiatry and philosophy

However tempting it may be to be satisfied with this approach as an end in itself, doing so would be dangerous. While it is gratifying for the patient to be seen and heard, over time they will only become further entrenched in the compromise formation of delusional beliefs. The role of the psychiatrist, once deep and meaningful rapport has been established and solidified, is to point out to the patient the limitations of the delusional belief system.

“I empathize that not all your delusions can be disproved. An extension of that thought is that many beliefs can’t be disproved. Society chooses to believe that aliens do not live on earth but at the same time we can’t disprove with absolute certainty that they don’t. We live in a world where attachment to others enriches our lives. If you continue to believe that aliens affect all existence around you, you will disconnect yourself from all of us. I hope that our therapy has shown you the importance of human connection and the sacrifice of your belief system.”

In the modern day, psychiatry has chosen to believe that schizophrenia is a biological disorder that requires treatment with antipsychotics. We choose to believe that this is likely true, and we think that our empirical experience has been consistent with this belief. However, we also think that patients with this illness are sentient beings who deserve to have their thoughts examined and addressed in a therapeutic framework that seeks to understand and acknowledge them as worthy and intelligent individuals. Philosophy provides psychiatry with tools for doing so.

Dr. Badre is a clinical and forensic psychiatrist in San Diego. He holds teaching positions at the University of California, San Diego, and the University of San Diego. He teaches medical education, psychopharmacology, ethics in psychiatry, and correctional care. Dr. Badre can be reached at his website, BadreMD.com. Dr. Khalafian practices full time as a general outpatient psychiatrist. He trained at the University of California, San Diego, for his psychiatric residency and currently works as a telepsychiatrist, serving an outpatient clinic population in northern California. Dr. Badre and Dr. Khalafian have no conflicts of interest.

References

1. https://iep.utm.edu/objectiv/.

2. Stolorow RD. The phenomenology of trauma and the absolutisms of everyday life: A personal journey. Psychoanal Psychol. 1999;16(3):464-8. doi: 10.1037/0736-9735.16.3.464.

3. Jaenicke C. “The Risk of Relatedness: Intersubjectivity Theory in Clinical Practice” Lanham, Md.: Jason Aronson, 2007.

4. Cuthbertson A. “Elon Musk cites Pong as evidence that we are already living in a simulation” The Independent. 2021 Dec 1. https://www.independent.co.uk/space/elon-musk-simulation-pong-video-game-b1972369.html.

5. Foucault M (Howard R, translator). “Madness and Civilization: A History of Insanity in the Age of Reason” New York: Vintage, 1965.


‘Dr. Caveman’ had a leg up on amputation

Article Type
Changed
Tue, 02/14/2023 - 12:59

 

Monkey see, monkey do (advanced medical procedures)

We don’t tend to think too kindly of our prehistoric ancestors. We throw around the word “caveman” – hardly a term of endearment – and depictions of Paleolithic humans rarely flatter their subjects. In many ways, though, our conceptions are correct. Humans of the Stone Age lived short, often brutish lives, but civilization had to start somewhere, and our prehistoric ancestors were often far more capable than we give them credit for.

Tim Maloney/Nature

Case in point is a recent discovery from an archaeological dig in Borneo: A young adult who lived 31,000 years ago was discovered with the lower third of their left leg amputated. Save the clever retort about the person’s untimely death, because this individual did not die from the surgery. The amputation occurred when the individual was a child and the subject lived for several years after the operation.

Amputation is usually unnecessary given our current level of medical technology, but it’s actually quite an advanced procedure, and this example predates the previously oldest known case of amputation by nearly 25,000 years. Not only did the surgeon need to cut in an appropriate place, they also needed to understand blood loss, the risk of infection, and the need to preserve skin in order to seal the wound back up. That’s quite a lot for our Paleolithic doctor to know, and it’s even more impressive considering the, shall we say, limited tools they would have had available to perform the operation.

Rocks. They cut off the leg with a rock. And it worked.

This discovery also gives insight into the amputee’s society. Someone knew that amputation was the right move for this person, indicating that it had been done before. In addition, the individual would not have been able to spring back into action hunting mammoths right away; they would have required care for the rest of their life. And clearly the community provided it, given the individual’s continued life post operation and their burial in a place of honor.

If only the American health care system were capable of such feats of compassion, but that would require the majority of politicians to be as clever as cavemen. We’re not hopeful on those odds.
 

The first step is admitting you have a crying baby. The second step is … a step

Knock, knock.

Who’s there?

Crying baby.

Crying baby who?

Current Biology/Ohmura et al.

Crying baby who … umm … doesn’t have a punchline. Let’s try this again.

A priest, a rabbi, and a crying baby walk into a bar and … nope, that’s not going to work.

Why did the crying baby cross the road? Ugh, never mind.

Clearly, crying babies are no laughing matter. What crying babies need is science. And the latest innovation – it’s fresh from a study conducted at the RIKEN Center for Brain Science in Saitama, Japan – in the science of crying babies is … walking. Researchers observed 21 unhappy infants and compared their responses to four strategies: being held by their walking mothers, held by their sitting mothers, lying in a motionless crib, or lying in a rocking cot.

The best strategy is for the mother – the experiment only involved mothers, but the results should apply to any caregiver – to pick up the crying baby, walk around for 5 minutes, sit for another 5-8 minutes, and then put the infant back to bed, the researchers said in a written statement.

The walking strategy, however, isn’t perfect. “Walking for 5 minutes promoted sleep, but only for crying infants. Surprisingly, this effect was absent when babies were already calm beforehand,” lead author Kumi O. Kuroda, MD, PhD, explained in a separate statement from the center.

It also doesn’t work on adults. We could not get a crying LOTME writer to fall asleep no matter how long his mother carried him around the office.
 

 

 

New way to detect Parkinson’s has already passed the sniff test

We humans aren’t generally known for our superpowers, but a woman from Scotland may just be the Smelling Superhero. Not only was she able to literally smell Parkinson’s disease (PD) on her husband 12 years before his diagnosis; she is also the reason that scientists have found a new way to test for PD.

© Siri Stafford/Thinkstock

Joy Milne, a retired nurse, told the BBC that her husband “had this musty rather unpleasant smell especially round his shoulders and the back of his neck and his skin had definitely changed.” She put two and two together after he had been diagnosed with PD and she came in contact with others with the same scent at a support group.

Researchers at the University of Manchester, working with Ms. Milne, have now created a skin test that uses mass spectrometry to analyze a sample of the patient’s sebum in just 3 minutes and is 95% accurate. They used the method to test 79 people with Parkinson’s and 71 without the disease and found “specific compounds unique to PD sebum samples when compared to healthy controls. Furthermore, we have identified two classes of lipids, namely, triacylglycerides and diglycerides, as components of human sebum that are significantly differentially expressed in PD,” they said in JACS Au.

This test could be available to general physicians within 2 years, which would provide new opportunities to the people who are waiting in line for neurologic consults. Ms. Milne’s husband passed away in 2015, but her courageous help and amazing nasal abilities may help millions down the line.
 

The power of flirting

It’s a common office stereotype: Women flirt with the boss to get ahead in the workplace, while men in power sexually harass women in subordinate positions. Nobody ever suspects the guys in the cubicles. A recent study takes a different look and paints a different picture.

Mart Production/Pexels

The investigators conducted multiple online and laboratory experiments examining how social sexual identity drives behavior in the workplace in relation to job position. They found that it was most often men in lower-power positions who were insecure about their roles who initiated social sexual behavior, even though they knew it was offensive. Why? Power.

They randomly paired more than 200 undergraduate students into male/female dyads, placed them in subordinate and boss-like roles, and asked them to choose from a series of social sexual questions they wanted to ask their teammate. Male participants who were placed in subordinate positions to a female boss chose social sexual questions more often than did male bosses, female subordinates, and female bosses.

So what does this say about the threat of workplace harassment? The researchers found that men and women differ in their strategy for flirtation. For men, it’s a way to gain more power. But problems arise when they rationalize their behavior with a character trait like being a “big flirt.”

“When we take on that identity, it leads to certain behavioral patterns that reinforce the identity. And then, people use that identity as an excuse,” lead author Laura Kray of the University of California, Berkeley, said in a statement from the school.

The researchers make a point to note that the study isn’t about whether flirting is good or bad, nor are they suggesting that people in powerful positions don’t sexually harass underlings. It’s meant to provide insight to improve corporate sexual harassment training. A comment or conversation held in jest could potentially be a warning sign for future behavior.
