Depression as a terminal illness
Is there a place for palliative care?
In 2020, there were 5,224 suicide deaths registered in England and Wales.1 The Mental Health Foundation, a London-based charitable organization, reports that approximately 70% of such deaths are in patients with depression.2 The number of attempted suicides is much higher – the South West London and St. George’s Mental Health Trust estimates that at least 140,000 people attempt suicide in England and Wales every year.3
In suicidal depression, the psychological pain is often unbearable and feels overwhelmingly incompatible with life. One is no longer living but merely surviving, and eventually the exhaustion leads to decompensation, marked by suicide. The goal is to end the suffering permanently, and this is achieved through death.
Depression, like all other physical and mental illnesses, runs a course. This course is highly variable between individuals, and even between separate relapse episodes in the same patient. Like many diagnoses, depression is known to lead to death in a significant number of people. Many suicidally depressed patients feel that death will be an inevitable result of the illness.
Suicide is often viewed as a symptom of severe depression, but what if we considered death as part of the disease process itself? Consequently, would it be justifiable to consider depression in these patients as a form of terminal illness, since without treatment, the condition would lead to death? Accordingly, could there be a place for palliative care in a small minority of suicidally depressed patients? Taking such a perspective would mean that instead of placing the focus on the prevention of deaths and prolonging of lifespan, the focus would be on making the patients comfortable as the disease progresses, maintaining their dignity, and promoting autonomy.
Suicidal depression and rights
Patients with suicidal depression are generally denied the right to refuse treatment. The rationale is that psychiatric patients lack the capacity to make such decisions in the acute setting, because of the direct effects of the unwell mind on their decision-making processes and cognitive faculties. While this may be true in some cases, there is limited evidence that it applies to all suicidally depressed patients in all cases.
Another argument against allowing suicidally depressed patients to decline treatment is the notion that the episode of depression can be successfully treated, and the patients can return to their normal level of functioning. However, individuals with a previous history of severe depression may well relapse at some point. In the same way, a cancer can be treated and the patient can return to a baseline level of functioning, only for the cancer to return later in life. In both cases, these relapses are emotionally and physically exhausting and painful to get through. The difference is that a cancer patient can decline further treatment and opt for no treatment or for palliative treatment, knowing that the disease will shorten life expectancy. For suicidal depression, this is not an option. Such patients may be sectioned, admitted, and treated against their will. Suicide, which could be considered a natural endpoint of the depressive illness, is deemed unacceptable.
Is it fair to confiscate one’s right to decline treatment solely because that person suffers from a mental illness, as opposed to a physical one? Numerous studies have demonstrated clear structural, neurological, and neurochemical changes in suicidal depression. This is evidence that the condition encompasses a clear physical property. Other conditions, such as dementia and chronic pain, have previously been accepted as grounds for euthanasia in certain countries. Pain is a subjective experience of nociceptive and neurochemical signaling; in the same way, depression is a subjective experience involving aberrant neurochemical signaling. The difference is that physical pain can often be localized, whereas the pain of suicidal depression cannot. Patients with suicidal depression nonetheless experience very severe and tangible pain, which can be difficult to articulate and difficult for others to understand if they have never experienced it themselves.
Like distinct forms of physical pain, suicidal depression creates a different form of pain, but it is pain, nonetheless. Is it therefore fair for suicidally depressed patients to be given lesser rights than those suffering from physical illnesses in determining their fate?
Suicidal depression and capacity
A patient is assumed to have capacity unless proven otherwise. This presumption is often reversed when managing psychiatric patients. However, if patients are able to fulfill all criteria required for demonstrating capacity (understanding the information, retaining it, weighing it up, and communicating the decision), surely they have demonstrated the capacity to make their own decisions, whether that is to receive or to refuse treatment.
For physical illnesses, adults with capacity are permitted to make decisions that their treating teams may not agree with, but this disagreement alone is generally insufficient to override the decisions. These patients, unlike in suicidal depression, have the right to refuse lifesaving or life-prolonging treatment.
An argument for this is that in terminal physical illnesses, death is a passive process that neither the patient nor the physician is actively causing. However, in many palliative settings, patients can be given medications and treatments for symptomatic relief, even if these may hasten death. The principle that makes this permissible is that the primary aim is to improve symptoms and ensure comfort; side effects and a hastened death are unintended effects. Similarly, in suicidal depression, one could argue that the patient should be permitted medications that may hasten or lead to death, so long as the primary aim is to relieve the unbearable mental pain and suffering.
Let us consider an alternative scenario. What if previously suicidal patients, currently in remission from depression, make advance directives? In their current healthy state, they assert that if, in the future, they were to relapse, they would not want any form of treatment. Instead, they wish for the disease to run its course, which may end in death through suicide.
In this case, the circumstances in which the statement was made would be entirely valid – the patients at that moment have capacity, are not under coercion, are able to articulate logical thought processes, and their reasoning is not affected by a concurrent psychiatric pathology. Furthermore, they can demonstrate that suicide is not an impulsive decision and that they have considered its consequences for themselves and others. If the patients can demonstrate all of the above, what would the ethical grounds be for refusing this advance directive?
Medical ethics
Below, I consider this debate in the context of four pillars of medical ethics.
Non-maleficence
To determine whether an action is in line with non-maleficence, one must ask whether the proposed treatment will improve or resolve one’s condition. In the case of severe suicidal depression, the treatment may help patients in the short term, but what happens if or when they relapse? The treatment will likely prolong life, but also inadvertently prolong suffering. What if the patients do not wish to go through this again? The treatment regime can be profoundly taxing for the patients, the loved ones, and sometimes even for the treating team. Are we doing more harm by forcing these patients to stay alive against their will?
Beneficence
Beneficence is the moral duty to promote the action that is in the patient’s best interest. But who should determine what the patient’s best interests are if the patient and the doctor disagree? Usually, this decision is made by the treating doctor, who considers the patient’s past and present wishes, beliefs and values, and capacity assessment. Supposing that the law was not a restriction, could one’s psychiatrist ever agree on psychiatric grounds alone that it is indeed in the patient’s best interests to die?
Doctors play a central role in the duty of care. But care does not always mean active treatment. Caring encompasses physical, psychological, and spiritual welfare and includes considering an individual patient’s dignity, personal circumstances, and wishes. In certain circumstances, keeping patients with capacity alive against their wishes could be more harmful than caring.
Autonomy
Autonomy gives patients ultimate decision-making responsibility for their own lives. It allows patients with capacity to decline treatment that is recommended by their physicians and to make decisions regarding their own death. However, in suicidally depressed patients, this autonomy is confiscated. Severely unwell patients, at high risk of committing suicide, are not permitted the autonomy to make decisions regarding their treatment, suicide, and death.
Justice
A justice-orientated and utilitarian view questions whether treating these patients wastes time, resources, and expertise, and whether those resources should instead be spent on patients who do want treatment.
For example, the British National Health Service holds an outstanding debt of £13.4 billion.4 The financial cost of treating mental illness in 2020/2021 was £14.31 billion.5 The NHS estimates that the wider costs to the national economy – including welfare benefits, housing support, social workers, community support, and lost productivity at work – amount to approximately £77 billion annually.6 Many severely depressed patients are so unwell that their ability to contribute to society, financially, socially, and otherwise, is minimal. If patients with capacity genuinely want to die, and society would benefit from a reduction in the pressures on health and social care services, would it not be in the interests of both to allow these patients to die? This way, resources could be redirected to service users who would appreciate and benefit from them the most.
A consequentialist view focuses on whether the action will benefit the patient overall; the action itself is not so relevant. According to this view, keeping suicidally depressed patients alive against their wishes would be ethical if the patients lack capacity. Keeping them safe and treating them until they are better would overall be in the patients’ best interests. However, if the patients do have capacity and wish to die, forcing them to stay alive and undergo treatment against their wishes would merely prolong their suffering and thus could be considered unethical.
When enough is enough
In suicidal treatment-resistant depression, where the patient has tried multiple treatments over time and carefully considered alternatives, when is it time to stop trying? For physical illness, patients can refuse treatment provided they can demonstrate capacity. In depression, they can refuse treatment only if they can demonstrate that they are not at serious risk to themselves or others. Most societies consider suicide as a serious risk to self and therefore unacceptable. However, if we considered suicide as a natural endpoint of the disease process, should the patient have the right to refuse treatment and allow the disease to progress to death?
The treatment regime can be a lengthy process and the repeated failures to improve can be physically and mentally exhausting and further compound the hopelessness. Treatments often have side effects, which further erode the patient’s physical and mental wellbeing. Is there a time when giving up and withdrawing active treatment is in the patient’s best interests, especially if that is what the patient wants?
Terminal diseases are incurable and likely to hasten one’s death. Severe suicidal treatment-resistant depression meets both conditions – it is unresponsive to treatment and has a high likelihood of precipitating premature death through suicide. Most terminal illnesses can be managed with palliative treatment. In the context of severe suicidal depression, euthanasia and assisted suicide could be considered a means of palliative care.
Palliative care involves managing the patient’s symptoms while maintaining dignity and comfort. Euthanasia and assisted suicide help to address all of these. Like palliative care, euthanasia and assisted suicide aim to improve the symptoms of depression by alleviating pain and suffering, even if they may hasten death.
Euthanasia and assisted suicide in severe depression
Euthanasia and assisted suicide are legal in seven countries. Two countries (Belgium and the Netherlands) permit euthanasia for psychiatric illnesses. Passive euthanasia – for example, withholding artificial life support – is practiced in most countries. In suicidal depression, withholding treatment could be considered analogous, in that it may directly lead to death by suicide.
In active euthanasia and assisted suicide, the patient is given a chemical that will directly lead to death. Euthanasia and assisted suicide allow individuals to die with dignity in a controlled and organized manner. They end the patients’ suffering and allow them to finally find peace. The difficulties that led them to seek euthanasia or assisted suicide reflect a loss of control over the pain and suffering in life, and euthanasia allows them to regain this control and autonomy through death. It also gives these individuals the chance to properly say goodbye to their loved ones and to share their thoughts and feelings.
In contrast, suicide is often covert, clandestine, and planned in secret, and it frequently requires individuals to be dishonest with their closest loved ones. The suicide often comes as a shock to the loved ones, and profound grief, questions, anger, pain, sorrow, and guilt follow. These arise from questions that have been left unanswered, thoughts that were never shared, regret that they had not done more to help, and anguish at knowing that their loved one died alone, in unbearable mental agony, unable to speak to anyone about this final hurdle.
Euthanasia and assisted suicide provide a path to overcome all these issues. They encourage open conversations between the patients, their loved ones, and the treating team. They promote transparency, mutual support, and help prepare the loved ones for the death. In this way, euthanasia and assisted suicide can benefit both the patient and the loved ones.
A significant proportion of severely suicidally depressed patients will eventually go on to commit or attempt suicide. Thus, giving them the autonomy to choose euthanasia or assisted suicide could be considered a kind, fair, and compassionate course of action, as it respects their wishes, and allows them to escape their suffering and to die with dignity.
Conclusion
Depression has historically never been considered a terminal illness, but there is undeniable evidence that a significant number of deaths every year are directly caused by depression. Should we therefore shift the focus from lifesaving and life-prolonging treatment to ensuring comfort and maintaining dignity, by exploring palliative options for extremely suicidally depressed patients with capacity who are adamant about ending their lives?
Euthanasia and assisted suicide for depression pose a profound paradox when viewed through a deontological lens. On this view, the correct course of action corresponds to the most “moral” action. The moral stance would be to help those who are suffering. But what exactly constitutes “help”? Are euthanasia and assisted suicide helping or harming? Likewise, is keeping patients with capacity alive against their wishes helping or harming? Many believe that euthanasia, assisted suicide, and suicide itself are intrinsically and morally wrong. But this poses another clear impasse: who should decide whether an action is moral? Should it be the individual? The treating physician? Or society?
Dr. Chang graduated from Imperial College London with an MBBS (medicine and surgery) and a BSc (gastroenterology and hepatology) degree.
References
1. Office for National Statistics. Suicides in England and Wales. 2021.
2. Faulkner A. Suicide and Deliberate Self Harm: The Fundamental Facts. Mental Health Foundation; 1997.
3. South West London and St. George’s Mental Health NHS Trust. Suicide Factsheet [ebook]. 2022.
4. The King’s Fund. Financial debts and loans in the NHS. 2020.
5. NHS England. Mental Health Five Year Forward View Dashboard. 2018.
6. National Mental Health, Policy into Practice. The costs of mental ill health.
Schizophrenia and postmodernism: A philosophical exercise in treatment
Schizophrenia is defined by episodes of psychosis: periods when one suffers from delusions, hallucinations, disorganized behavior, disorganized speech, and negative symptoms. The concept of schizophrenia can be simplified as a detachment from reality. Patients who struggle with this illness frame their perceptions with a different set of rules and beliefs than the rest of society. These altered perceptions frequently become the basis of delusions, one of the most recognized symptoms of schizophrenia.
A patient with schizophrenia does not so much have delusions as hold a belief system that no one else recognizes. It is not the mismatch between “objective reality” and the held belief that qualifies the belief as delusional, so much as the mismatch with the beliefs of those around you. Heliocentrism denial – denying that the earth revolves around the sun – is incorrect because it is not factual. However, heliocentrism denial is a delusion not because it is incorrect, but because society has chosen it to be incorrect.
We’d like to invite the reader to a thought experiment. “Objective reality” can be defined as “anything that exists as it is independent of any conscious awareness of it.”1 “Conscious awareness” entails an observer. If we remove the concept of consciousness, or of an observer, from existence, how would we then define “objective reality,” when the very definition points to the existence of an observer? One deduces that there is no way to define “objective reality” without invoking the notion of an observer or of consciousness.
It is our contention that the concept of an “objective reality” is tautological – it answers itself. This philosophical quandary helps explain why a person with schizophrenia may feel alienated by others who do not appreciate their perceived “objective reality.”
Schizophrenia and ‘objective reality’
A patient with schizophrenia enters a psychiatrist’s office and may realize that their belief is not shared by others and by society. The schizophrenic patient may understand the concept of delusions as fixed, false beliefs. However, to them, it is everyone else who is delusional. They may attempt to convince you, as their provider, to switch to their side. They may provide you with evidence for their belief system. One could argue that believing them, in response, would be curative. If not only one’s psychiatrist but society accepted the schizophrenic patient’s belief system, it would no longer be delusional, whether real or not. Objective reality requires the presence of an observer to grant its value of truth.
In a simplistic way, those were the arguments of postmodernist philosophers. Reality is tainted by its observer, much as the Heisenberg uncertainty principle teaches that there is a limit to our simultaneous knowledge of a particle’s position and momentum. This perspective may explain why Michel Foucault, PhD, the famous French postmodernist philosopher, was so interested in psychiatry and in particular schizophrenia. Dr. Foucault was deeply concerned with society imposing its beliefs and value system on patients, and positioning itself as the ultimate arbiter of reality. He went on to postulate that the bigger difference between schizophrenic patients and psychiatrists was not who occupied the correct plane of reality, but who was granted the authority by society to arbitrate the answer. If reality is a subjective construct enforced by a ruling class, then who holds the power to rule becomes of the utmost importance.
Intersubjectivity theory in psychoanalysis has many of its sensibilities rooted in such thought. It argues against the myth of the isolated mind. Truth, in the context of psychoanalysis, is seen as an emergent product of dialogue between the therapist/patient dyad. It is in line with the ontological shift from a logical-positivist model to the more modern, constructivist framework. In terms of its view of psychosis, “delusional ideas were understood as a form of absolution – a radical decontextualization serving vital and restorative defensive functions.”2
It is an interesting proposition to advance this theory further by contending that it is not the independent consciousness of two entities that creates the intersubjective space, but rather the intersubjective space that literally creates the conscious entities. Could it not be said that the intersubjective relationship is more fundamental than consciousness itself? As Chris Jaenicke, Dipl.-Psych., wrote, “infant research has opened our eyes to the fact that there is no unilateral action.”3
Postmodernism and psychiatry
Postmodernism and its precursor, skepticism, have long histories within the field of philosophy, and this article will not summarize centuries of philosophical thought. In brief, skepticism is a powerful philosophical tool for exposing the limitations of human knowledge and certainty.
As a pedagogic jest to trainees, we will often point out that none of us “really knows” our date of birth with absolute certainty. None of us were conscious enough to remember our birth, to understand the concept of date or time, or to know who participated in it. At a fundamental level, we choose to believe our date of birth. Similarly, while the world could be a fictionalized simulation,4 we choose to believe that it is real because it behaves in a consistent way that permits scientific study. Postmodernism and skepticism are philosophical tools that permit one to question everything, but they are themselves limited by the real and empiric lives we live.
Psychiatrists are empiricists. We treat real people, who suffer in a very perceptible way and live in a very tangible world. We frown on the postmodernist perspective and spend little or no time studying it as trainees. However, postmodernism, despite its philosophical and practical flaws and its adjacency to antipsychiatry,5 is an essential tool for the psychiatrist. In addition to providing the standard treatments for schizophrenia, the psychiatrist should attempt to create a bond with someone who is disconnected from the world. Postmodernism provides us with a way of doing so.
A psychiatrist who understands and appreciates postmodernism can show a patient why at some level we cannot refute all delusions. This psychiatrist can subsequently have empathy that some of the core beliefs of a patient may always be left unanswered. The psychiatrist can appreciate that to some degree the reason why the patient’s beliefs are not true is because society has chosen for them not to be true. Additionally, the psychiatrist can acknowledge to the patient that in some ways the correctness of a delusion is less relevant than the power of society to enforce its reality on the patient. This connection in itself is partially curative as it restores the patient’s attachment to society; we now have some plane of reality, the relationship, which is the same.
Psychiatry and philosophy
However tempting it may be to treat this approach as an end in itself, doing so would be dangerous. While it is gratifying for patients to feel seen and heard, over time they will only become further entrenched in the compromise formation of their delusional beliefs. The role of the psychiatrist, once a deep and meaningful rapport has been established and solidified, is to point out to the patient the limitations of the delusional belief system.
“I empathize that not all of your delusions can be disproved. An extension of that thought is that many beliefs can’t be disproved. Society chooses to believe that aliens do not live on Earth, but at the same time we can’t disprove with absolute certainty that they don’t. We live in a world where attachment to others enriches our lives. If you continue to believe that aliens affect all existence around you, you will disconnect yourself from all of us. I hope that our therapy has shown you the importance of human connection, and that it is worth the sacrifice of your belief system.”
In the modern day, psychiatry has chosen to believe that schizophrenia is a biological disorder that requires treatment with antipsychotics. We choose to believe that this is likely true, and our empirical experience has been consistent with this belief. However, we also believe that patients with this illness are sentient beings who deserve to have their thoughts examined and addressed in a therapeutic framework that seeks to understand and acknowledge them as worthy and intelligent individuals. Philosophy provides psychiatry with the tools to do so.
Dr. Badre is a clinical and forensic psychiatrist in San Diego. He holds teaching positions at the University of California, San Diego, and the University of San Diego. He teaches medical education, psychopharmacology, ethics in psychiatry, and correctional care. Dr. Badre can be reached at his website, BadreMD.com. Dr. Khalafian practices full time as a general outpatient psychiatrist. He trained at the University of California, San Diego, for his psychiatric residency and currently works as a telepsychiatrist, serving an outpatient clinic population in northern California. Dr. Badre and Dr. Khalafian have no conflicts of interest.
References
1. Objectivity. Internet Encyclopedia of Philosophy. https://iep.utm.edu/objectiv/.
2. Stolorow RD. The phenomenology of trauma and the absolutisms of everyday life: A personal journey. Psychoanal Psychol. 1999;16(3):464-8. doi: 10.1037/0736-9735.16.3.464.
3. Jaenicke C. “The Risk of Relatedness: Intersubjectivity Theory in Clinical Practice.” Lanham, Md.: Jason Aronson, 2007.
4. Cuthbertson A. “Elon Musk cites Pong as evidence that we are already living in a simulation.” The Independent. 2021 Dec 1. https://www.independent.co.uk/space/elon-musk-simulation-pong-video-game-b1972369.html.
5. Foucault M (Howard R, translator). “Madness and Civilization: A History of Insanity in the Age of Reason.” New York: Vintage, 1965.
Low-dose oral minoxidil for the treatment of alopecia
Other than oral finasteride, vitamins, and topicals, there has been little advancement in the treatment of androgenetic alopecia (AGA), leaving many (including me) desperate for anything remotely new.
Oral minoxidil is a peripheral vasodilator approved by the Food and Drug Administration for use in patients with hypertensive disease at doses ranging from 10 mg to 40 mg daily. Animal studies have shown that minoxidil affects the hair growth cycle by shortening the telogen phase and prolonging the anagen phase.
Recent case studies have also provided growing evidence for the off-label use of low-dose oral minoxidil (LDOM) in treating different types of alopecia. Topical minoxidil is metabolized into its active metabolite, minoxidil sulfate, by sulfotransferase enzymes located in the outer root sheath of hair follicles. Sulfotransferase expression varies greatly among individual scalps, and this variation correlates directly with the wide range of responses to topical minoxidil. LDOM, however, is more widely effective because it requires less follicular enzymatic activity than the topical form to generate its active metabolite.
In a retrospective series by Beach and colleagues evaluating the efficacy and tolerability of LDOM for treating AGA, 33 of 51 patients (65%) had increased scalp hair growth and 14 of 51 (27%) had decreased hair shedding with LDOM. Patients with nonscarring alopecia were most likely to show improvement. Side effects were dose dependent and infrequent; the most frequent adverse effects were hypertrichosis, lightheadedness, edema, and tachycardia, and no life-threatening adverse effects were observed. Although a recent case report described severe pericardial effusion, edema, and anasarca in a woman with frontal fibrosing alopecia treated with LDOM, life-threatening side effects are rare.
To compare the efficacy of topical versus oral minoxidil, Ramos and colleagues performed a 24-week prospective study of low-dose (1 mg/day) oral minoxidil, compared with topical 5% minoxidil, in the treatment of 52 women with female pattern hair loss. Trichoscopic images were evaluated in blinded fashion by three dermatologists to compare the change in total hair density in a target area from baseline to week 24.
After 24 weeks of treatment, total hair density had increased by 12% among the women taking oral minoxidil, compared with 7.2% among the women who applied topical minoxidil (P = .09).
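For readers who want to see the arithmetic behind that outcome measure, here is a minimal sketch in Python; the baseline and week-24 hair densities below are invented for illustration, and only the resulting percentages mirror the averages reported by Ramos and colleagues.

# Percent change in total hair density from baseline to week 24.
# The density values are hypothetical; only the resulting percentages
# mirror the averages reported in the study.
def percent_change(baseline: float, week24: float) -> float:
    return (week24 - baseline) / baseline * 100

print(round(percent_change(120.0, 134.4), 1))  # 12.0 (oral minoxidil arm)
print(round(percent_change(120.0, 128.6), 1))  # 7.2 (topical minoxidil arm)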
In the armamentarium of hair-loss treatments, dermatologists have limited choices. LDOM can be used in patients with both scarring and nonscarring alopecia if they are monitored regularly. The doses I recommend are 1.25-5 mg daily, titrated up slowly in properly selected patients without contraindications who are not taking other vasodilators. Self-reported dizziness, edema, and headache are common, and treatments for facial hypertrichosis in women are always discussed. Clinical efficacy can be evaluated after 10-12 months of therapy, and concomitant spironolactone can be given to mitigate the side effect of hypertrichosis. Patient selection is crucial, as patients with severe scarring alopecia and those with active inflammatory diseases of the scalp may not see similar results. As with other hair-loss treatments, courses of 10-12 months are often needed to see visible signs of hair growth.
Dr. Talakoub and Naissan O. Wesley, MD, are cocontributors to this column. Dr. Talakoub is in private practice in McLean, Va. Dr. Wesley practices dermatology in Beverly Hills, Calif. Write to them at [email protected]. Dr. Talakoub had no relevant disclosures.
References
Beach RA et al. J Am Acad Dermatol. 2021 Mar;84(3):761-3.
Dlova et al. JAAD Case Reports. 2022 Oct;28:94-6.
Jimenez-Cauhe J et al. J Am Acad Dermatol. 2021 Jan;84(1):222-3.
Ramos PM et al. J Eur Acad Dermatol Venereol. 2020 Jan;34(1):e40-1.
Ramos PM et al. J Am Acad Dermatol. 2020 Jan;82(1):252-3.
Randolph M and Tosti A. J Am Acad Dermatol. 2021 Mar;84(3):737-46.
Vañó-Galván S et al. J Am Acad Dermatol. 2021 Jun;84(6):1644-51.
Demystifying psychotherapy
Managing psychiatric illnesses is rapidly becoming routine practice for primary care pediatricians, whether that means screening for symptoms of anxiety and depression, starting medication, or providing psychoeducation to youth and parents. Pediatricians can provide strategies to address the impairments of sleep, energy, motivation, and appetite that can accompany these illnesses. Psychotherapy, a relationship based on understanding and providing support, should be a core element of treatment for emotional disorders, but there is a great deal of uncertainty around which therapies are supported by evidence. This month, we offer a primer on the evidence-based psychotherapies for youth, while recognizing that research defining the effectiveness of psychotherapy is limited and complex.
Cognitive-behavioral psychotherapy (CBT)
Mention psychotherapy and most people think of a patient reclining on a couch, free-associating about their childhood while a therapist sits behind them taking notes. This potent image stems from psychoanalytic psychotherapy, developed in the late 19th century by Sigmund Freud and based on his theory that unconscious conflicts drove most of the puzzling behaviors and emotional distress associated with “neurosis.” Psychoanalysis became popular in 20th-century America, even for use with children. Evidence for it is hard to develop, since psychoanalytic therapy often lasts years, the number of patients studied is small, and the method is hard to standardize.
A focus on how to shape behaviors directly also emerged in the early 20th century (in the work of John Watson and Ivan Pavlov). Aaron Beck, MD, the father of CBT, observed in his psychoanalytic treatments that many patients appeared to be experiencing emotional distress around thoughts that were not unconscious. Instead, his patients were experiencing “automatic thoughts”: rapid, often-distorted thoughts that have the force of truth in the thinker. These thoughts create emotional distress and behaviors that may in turn reinforce the thoughts and the distress. For example, a depressed patient who is uncomfortable in social situations may think, “nobody ever likes me.” This may cause them to appear uncomfortable or unfriendly in a new social situation and prevent them from making connections, perpetuating a cycle of isolation, insecurity, and loneliness. Identifying these automatic thoughts and their connection to painful feelings and perpetuating behaviors is at the core of CBT.
In CBT the therapist is much more active than in psychoanalysis. They engage patients in identifying thought distortions together, challenging patients on the truth of these thoughts and recognizing their connection to emotional distress. They also identify maladaptive behaviors and focus on strategies to build new, more effective behavioral responses to thoughts, feelings, and situations. This is often done with gradual “exposures” to new behaviors, which are naturally reinforced by better outcomes or lowered distress. When performed with high fidelity, CBT is a very structured treatment that is closer to an emotionally supportive form of coaching and skill building. CBT is at the core of most evidence-based psychotherapies that have emerged in the past 60 years.
CBT is the first-line treatment for anxiety disorders in children, adolescents, and adults. A variant called “exposure and response prevention” is the first-line treatment for obsessive-compulsive disorder, and is predominantly behavioral. It is focused on preventing patients with anxiety disorders from engaging in the maladaptive behaviors that lower their anxiety in the short term but cause worsened anxiety and impairment over time (such as avoiding social situations when they are worried that others won’t like them).
CBT is also a first-line treatment for major depressive episodes in teenagers and adults, although those for whom the symptoms are severe often need medication to be able to fully participate in therapy. There are variants of CBT that have demonstrated efficacy in the treatment of posttraumatic stress disorder, bulimia, and even psychosis. It makes developmental sense that therapies with a problem-focused coaching approach might be more effective in children and adolescents than open-ended exploratory psychotherapies.
Traditional CBT was not very effective for patients with a variant of depression that is marked by stormy relationships, irritability, chronic suicidality, and impulsive attempts to regulate discomfort (including bingeing, purging, sexual acting-out, drug use, and self-injury or cutting), a symptom pattern called “borderline personality disorder.” These patients often ended up on multiple medications with only modest improvements in their function and well-being.
But in the 1990s, a research psychologist named Marsha Linehan developed a modified version of CBT for these patients called dialectical behavior therapy (DBT). The “dialectic” emphasizes the role of two things being true at once, in this case the need for both acceptance and change. DBT helps patients develop distress tolerance and emotional regulation skills alongside adaptive social and communication skills. DBT has demonstrated efficacy in the treatment of these patients as well as in the treatment of other disorders marked by poor distress tolerance and self-regulation (such as substance use disorders, binge-eating disorder, and PTSD).
DBT was adapted for use in adolescents, given the prevalence of these problems in this age group, and it is the first-line treatment for adolescents with these specific mood and behavioral symptoms. High-fidelity DBT has individual, group, and family components, all of which are essential for the treatment to be effective.
Instruction in the principles of CBT and DBT is part of graduate school in psychology, but not every postgraduate training program includes thorough training in their practice; completion of such specialized training leads to certification. It is very important for families to understand that anyone may call themselves a psychotherapist. Therapists with master’s degrees (MSW, MFT, PCC, and others) may not have had exposure to these evidence-based treatments in their shorter graduate programs, and even doctoral-level training programs often do not include complete training in the high-fidelity delivery of these therapies.
It is critical that you help families be educated consumers and ask therapists if they have training and certification in the recommended therapy. The Psychology Today website has a therapist referral resource that includes this information. Training programs can provide access to therapists who are learning these therapies; with skilled supervision, they can provide excellent treatment.
We should note that there are several other evidence-based therapies, including family-based treatment for anorexia nervosa, motivational interviewing for substance use disorders, and interpersonal psychotherapy for depression associated with high family conflict in adolescents.
There is good evidence that the quality of the alliance between therapist and patient is a critical predictor of whether a therapy will be effective. It is appropriate for your patient to look for a therapist whom they can trust and talk to, and to confirm that the therapist is trained in the recommended psychotherapy. Otherwise, your patient is spending valuable time and money on an enterprise that may not be effective. This can leave them and their parents feeling discouraged or even hopeless about the prospects for recovery and can promote an overreliance on medications. In addition to providing your patients with effective screening, medication treatment, and psychoeducation, you can enhance their ability to find an optimal therapist to relieve their suffering.
Dr. Swick is physician in chief at Ohana, Center for Child and Adolescent Behavioral Health, Community Hospital of the Monterey (Calif.) Peninsula. Dr. Jellinek is professor emeritus of psychiatry and pediatrics, Harvard Medical School, Boston. Email them at [email protected].
Managing psychiatric illnesses is rapidly becoming routine practice for primary care pediatricians, whether screening for symptoms of anxiety and depression, starting medication, or providing psychoeducation to youth and parents. Pediatricians can provide strategies to address the impairments of sleep, energy, motivation and appetite that can accompany these illnesses. Psychotherapy, a relationship based on understanding and providing support, should be a core element of treatment for emotional disorders, but there is a great deal of uncertainty around what therapies are supported by evidence. This month, we offer a primer on the evidence-based psychotherapies for youth and we also recognize that research defining the effectiveness of psychotherapy is limited and complex.
Cognitive-behavioral psychotherapy (CBT)
Mention psychotherapy and most people think of a patient reclining on a couch free-associating about their childhood while a therapist sits behind them taking notes. This potent image stems from psychoanalytic psychotherapy, developed in the 19th century by Sigmund Freud, and was based on his theory that unconscious conflicts drove most of the puzzling behaviors and emotional distress associated with “neurosis.” Psychoanalysis became popular in 20th century America, even for use with children. Evidence is hard to develop since psychoanalytic therapy often lasts years, there are a limited number of patients, and the method is hard to standardize.
A focus on how to shape behaviors directly also emerged in the early 20th century (in the work of John Watson and Ivan Pavlov). Aaron Beck, MD, the father of CBT, observed in his psychoanalytic treatments that many patients appeared to be experiencing emotional distress around thoughts that were not unconscious. Instead, his patients were experiencing “automatic thoughts,” or rapid, often-distorted thoughts that have the force of truth in the thinker. These thoughts create emotional distress and behaviors that may reinforce the thoughts and emotional distress. For example, a depressed patient who is uncomfortable in social situations may think “nobody ever likes me.” This may cause them to appear uncomfortable or unfriendly in a new social situation and prevent them from making connections, perpetuating a cycle of isolation, insecurity, and loneliness. Identifying these automatic thoughts, and their connection to painful feelings and perpetuating behaviors is at the core of CBT.
In CBT the therapist is much more active than in psychoanalysis. They engage patients in identifying thought distortions together, challenging them on the truth of these thoughts and recognizing the connection to emotional distress. They also identify maladaptive behaviors and focus on strategies to build new more effective behavioral responses to thoughts, feelings, and situations. This is often done with gradual “exposures” to new behaviors, which are naturally reinforced by better outcomes or lowered distress. When performed with high fidelity, CBT is a very structured treatment that is closer to an emotionally supportive form of coaching and skill building. CBT is at the core of most evidence-based psychotherapies that have emerged in the past 60 years.
CBT is the first-line treatment for anxiety disorders in children, adolescents, and adults. A variant called “exposure and response prevention” is the first-line treatment for obsessive-compulsive disorder, and is predominantly behavioral. It is focused on preventing patients with anxiety disorders from engaging in the maladaptive behaviors that lower their anxiety in the short term but cause worsened anxiety and impairment over time (such as avoiding social situations when they are worried that others won’t like them).
CBT is also a first-line treatment for major depressive episodes in teenagers and adults, although those for whom the symptoms are severe often need medication to be able to fully participate in therapy. There are variants of CBT that have demonstrated efficacy in the treatment of posttraumatic stress disorder, bulimia, and even psychosis. It makes developmental sense that therapies with a problem-focused coaching approach might be more effective in children and adolescents than open-ended exploratory psychotherapies.
Traditional CBT was not very effective for patients with a variant of depression that is marked by stormy relationships, irritability, chronic suicidality, and impulsive attempts to regulate discomfort (including bingeing, purging, sexual acting-out, drug use, and self-injury or cutting), a symptom pattern called “borderline personality disorder.” These patients often ended up on multiple medications with only modest improvements in their function and well-being.
But in the 1990s, a research psychologist named Marsha Linnehan developed a modified version of CBT to use with these patients called dialectical-behavioral therapy (DBT). The “dialectic” emphasizes the role of two things being true at once, in this case the need for acceptance and change. DBT helps patients develop distress tolerance and emotional regulation skills alongside adaptive social and communication skills. DBT has demonstrated efficacy in the treatment of these patients as well as in the treatment of other disorders marked by poor distress tolerance and self-regulation (such as substance use disorders, binge-eating disorder, and PTSD).
DBT was adapted for use in adolescents given the prevalence of these problems in this age group, and it is the first-line treatment for adolescents with these specific mood and behavioral symptoms. High-fidelity DBT has an individual, group, and family component that are all essential for the treatment to be effective.
Instruction about the principles of CBT and DBT is a part of graduate school in psychology, but not every postgraduate training program includes thorough training in their practice. Completion of this specialized training leads to certification. It is very important that families understand that anyone may call themselves a psychotherapist. Those therapists who have master’s degrees (MSW, MFT, PCC, and others) may not have had exposure to these evidence-based treatments in their shorter graduate programs. Even doctoral-level training programs often do not include complete training in the high-fidelity delivery of these therapies.
It is critical that you help families be educated consumers and ask therapists if they have training and certification in the recommended therapy. The Psychology Today website has a therapist referral resource that includes this information. Training programs can provide access to therapists who are learning these therapies; with skilled supervision, they can provide excellent treatment.
We should note that there are several other evidence-based therapies, including family-based treatment for anorexia nervosa, motivational interviewing for substance use disorders, and interpersonal psychotherapy for depression associated with high family conflict in adolescents.
There is good evidence that the quality of the alliance between therapist and patient is a critical predictor of whether a therapy will be effective. It is appropriate for your patient to look for a therapist that they can trust and talk to and that their therapist be trained in the recommended psychotherapy. Otherwise, your patient is spending valuable time and money on an enterprise that may not be effective. This can leave them and their parents feeling discouraged or even hopeless about the prospects for recovery and promote an overreliance on medications. In addition to providing your patients with effective screening, initiating medication treatment, and psychoeducation, you can enhance their ability to find an optimal therapist to relieve their suffering.
Dr. Swick is physician in chief at Ohana, Center for Child and Adolescent Behavioral Health, Community Hospital of the Monterey (Calif.) Peninsula. Dr. Jellinek is professor emeritus of psychiatry and pediatrics, Harvard Medical School, Boston. Email them at [email protected].
Managing psychiatric illnesses is rapidly becoming routine practice for primary care pediatricians, whether screening for symptoms of anxiety and depression, starting medication, or providing psychoeducation to youth and parents. Pediatricians can provide strategies to address the impairments of sleep, energy, motivation and appetite that can accompany these illnesses. Psychotherapy, a relationship based on understanding and providing support, should be a core element of treatment for emotional disorders, but there is a great deal of uncertainty around what therapies are supported by evidence. This month, we offer a primer on the evidence-based psychotherapies for youth and we also recognize that research defining the effectiveness of psychotherapy is limited and complex.
Cognitive-behavioral psychotherapy (CBT)
Mention psychotherapy and most people think of a patient reclining on a couch free-associating about their childhood while a therapist sits behind them taking notes. This potent image stems from psychoanalytic psychotherapy, developed in the 19th century by Sigmund Freud, and was based on his theory that unconscious conflicts drove most of the puzzling behaviors and emotional distress associated with “neurosis.” Psychoanalysis became popular in 20th century America, even for use with children. Evidence is hard to develop since psychoanalytic therapy often lasts years, there are a limited number of patients, and the method is hard to standardize.
A focus on how to shape behaviors directly also emerged in the early 20th century (in the work of John Watson and Ivan Pavlov). Aaron Beck, MD, the father of CBT, observed in his psychoanalytic treatments that many patients appeared to be experiencing emotional distress around thoughts that were not unconscious. Instead, his patients were experiencing “automatic thoughts,” or rapid, often-distorted thoughts that have the force of truth in the thinker. These thoughts create emotional distress and behaviors that may reinforce the thoughts and emotional distress. For example, a depressed patient who is uncomfortable in social situations may think “nobody ever likes me.” This may cause them to appear uncomfortable or unfriendly in a new social situation and prevent them from making connections, perpetuating a cycle of isolation, insecurity, and loneliness. Identifying these automatic thoughts, and their connection to painful feelings and perpetuating behaviors is at the core of CBT.
In CBT the therapist is much more active than in psychoanalysis. They engage patients in identifying thought distortions together, challenging them on the truth of these thoughts and recognizing the connection to emotional distress. They also identify maladaptive behaviors and focus on strategies to build new more effective behavioral responses to thoughts, feelings, and situations. This is often done with gradual “exposures” to new behaviors, which are naturally reinforced by better outcomes or lowered distress. When performed with high fidelity, CBT is a very structured treatment that is closer to an emotionally supportive form of coaching and skill building. CBT is at the core of most evidence-based psychotherapies that have emerged in the past 60 years.
CBT is the first-line treatment for anxiety disorders in children, adolescents, and adults. A variant called “exposure and response prevention” is the first-line treatment for obsessive-compulsive disorder, and is predominantly behavioral. It is focused on preventing patients with anxiety disorders from engaging in the maladaptive behaviors that lower their anxiety in the short term but cause worsened anxiety and impairment over time (such as avoiding social situations when they are worried that others won’t like them).
CBT is also a first-line treatment for major depressive episodes in teenagers and adults, although those for whom the symptoms are severe often need medication to be able to fully participate in therapy. There are variants of CBT that have demonstrated efficacy in the treatment of posttraumatic stress disorder, bulimia, and even psychosis. It makes developmental sense that therapies with a problem-focused coaching approach might be more effective in children and adolescents than open-ended exploratory psychotherapies.
Traditional CBT was not very effective for patients with a variant of depression that is marked by stormy relationships, irritability, chronic suicidality, and impulsive attempts to regulate discomfort (including bingeing, purging, sexual acting-out, drug use, and self-injury or cutting), a symptom pattern called “borderline personality disorder.” These patients often ended up on multiple medications with only modest improvements in their function and well-being.
But in the 1990s, a research psychologist named Marsha Linnehan developed a modified version of CBT to use with these patients called dialectical-behavioral therapy (DBT). The “dialectic” emphasizes the role of two things being true at once, in this case the need for acceptance and change. DBT helps patients develop distress tolerance and emotional regulation skills alongside adaptive social and communication skills. DBT has demonstrated efficacy in the treatment of these patients as well as in the treatment of other disorders marked by poor distress tolerance and self-regulation (such as substance use disorders, binge-eating disorder, and PTSD).
DBT was adapted for use in adolescents given the prevalence of these problems in this age group, and it is the first-line treatment for adolescents with these specific mood and behavioral symptoms. High-fidelity DBT has an individual, group, and family component that are all essential for the treatment to be effective.
Instruction about the principles of CBT and DBT is a part of graduate school in psychology, but not every postgraduate training program includes thorough training in their practice. Completion of this specialized training leads to certification. It is very important that families understand that anyone may call themselves a psychotherapist. Those therapists who have master’s degrees (MSW, MFT, PCC, and others) may not have had exposure to these evidence-based treatments in their shorter graduate programs. Even doctoral-level training programs often do not include complete training in the high-fidelity delivery of these therapies.
It is critical that you help families be educated consumers and ask therapists if they have training and certification in the recommended therapy. The Psychology Today website has a therapist referral resource that includes this information. Training programs can provide access to therapists who are learning these therapies; with skilled supervision, they can provide excellent treatment.
We should note that there are several other evidence-based therapies, including family-based treatment for anorexia nervosa, motivational interviewing for substance use disorders, and interpersonal psychotherapy for depression associated with high family conflict in adolescents.
There is good evidence that the quality of the alliance between therapist and patient is a critical predictor of whether a therapy will be effective. It is appropriate for your patient to look for a therapist they can trust and talk to, and to confirm that the therapist is trained in the recommended psychotherapy. Otherwise, your patient is spending valuable time and money on an enterprise that may not be effective. This can leave them and their parents feeling discouraged or even hopeless about the prospects for recovery and promote an overreliance on medications. In addition to providing effective screening, initiating medication treatment, and offering psychoeducation, you can enhance your patients’ ability to find an optimal therapist to relieve their suffering.
Dr. Swick is physician in chief at Ohana, Center for Child and Adolescent Behavioral Health, Community Hospital of the Monterey (Calif.) Peninsula. Dr. Jellinek is professor emeritus of psychiatry and pediatrics, Harvard Medical School, Boston. Email them at [email protected].
When do we stop using BMI to diagnose obesity?
“BMI is trash. Full stop.” This controversial tweet received 26,500 likes and almost 3,000 retweets. The 400 comments from medical and non–health care personnel ranged from agreeable to contrary to offensive.
As a Black woman who is an obesity expert living with the impact of obesity in my own life, I know the emotion that a BMI conversation can evoke. Before emotions hijack the conversation, let’s discuss BMI’s past, present, and future.
BMI: From observational measurement to clinical use
Imagine walking into your favorite clothing store where an eager clerk greets you with a shirt to try on. The fit is off, but the clerk insists that the shirt must fit because everyone who’s your height should be able to wear it. This scenario seems ridiculous. But this is how we’ve come to use the BMI. Instead of thinking that people of the same height may be the same size, we declare that they must be the same size.
The idea behind the BMI was conceived in 1832 by Belgian anthropologist and mathematician Adolphe Quetelet, but he didn’t intend for it to be a health measure. Instead, it was simply an observation of how people’s weight changed in proportion to height over their lifetime.
Fast-forward to the 20th century, when insurance companies began using weight as an indicator of health status. Weights were recorded in a “Life Table.” Individual health status was determined on the basis of arbitrary cut-offs for weight on the Life Tables. Furthermore, White men set the “normal” weight standards because they were the primary insurance holders.
In 1972, Ancel Keys, PhD, a physiologist and leading expert in body composition at the time, cried foul on this practice and sought to standardize the use of weight as a health indicator. Keys used Quetelet’s calculation and termed it the Body Mass Index.
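For readers who want the arithmetic, the index Keys standardized is simply weight in kilograms divided by the square of height in meters. Here is a minimal illustrative sketch in Python (the cut-offs shown are the conventional WHO adult categories; this is a teaching example, not a clinical tool):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Quetelet's index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_category(value: float) -> str:
    """Conventional WHO adult cut-offs, for illustration only."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obesity"

# Example: 81.6 kg (about 180 lb) at 1.75 m (about 5 ft 9 in)
value = bmi(81.6, 1.75)
print(f"BMI {value:.1f}: {who_category(value)}")  # BMI 26.6: overweight
```

Note that nothing in the calculation sees body composition, waist circumference, or metabolic health, which is precisely the limitation the rest of this article addresses.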
By 1985, the U.S. National Institutes of Health and the World Health Organization adopted the BMI. By the 21st century, BMI had become widely used in clinical settings. For example, the Centers for Medicare & Medicaid Services adopted BMI as a quality-of-care measure, placing even more pressure on clinicians to use BMI as a health screening tool.
BMI as a tool to diagnose obesity
We can’t discuss BMI without discussing the disease of obesity. BMI is the most widely used tool to diagnose obesity. One-third of Americans meet the criteria for obesity, and another one-third are at risk for obesity.
Compared with BMI’s relatively quick acceptance into clinical practice, however, obesity was only recently recognized as a disease.
Historically, obesity has been viewed as a lifestyle choice, fueled by misinformation and multiple forms of bias. The historical bias associated with BMI and discrimination has led some public health officials and scholars to dismiss the use of BMI or fail to recognize obesity as a disease.
This is a dangerous conclusion, because it comes to the detriment of the very people disproportionately impacted by obesity-related health disparities.
Furthermore, weight bias continues to prevent people living with obesity from receiving insurance coverage for life-enhancing obesity medications and interventions.
Is it time to phase out BMI?
The BMI is intertwined with many forms of bias: age, gender, race, ethnicity, and even weight. Therefore, it is time to phase out BMI. However, phasing out BMI is complex and will take time, given that:
- Obesity is still a relatively “young” disease. 2023 marks the 10th anniversary of obesity’s recognition as a disease by the American Medical Association. Currently, BMI is the most widely used tool to diagnose obesity. Tools such as waist circumference, body composition, and metabolic health assessment will need to replace the BMI. Shifting from BMI emphasizes that obesity is more than a number on the scale. Obesity, as defined by the Obesity Medicine Association, is indeed a “chronic, relapsing, multi-factorial, neurobehavioral disease, wherein an increase in body fat promotes adipose tissue dysfunction and abnormal fat mass physical forces, resulting in adverse metabolic, biomechanical, and psychosocial health consequences.”
- Much of our health research is tied to BMI. There have been some shifts in looking at non–weight-related health indicators. However, we need more robust studies evaluating other health indicators beyond weight and BMI. The availability of this data will help eliminate the need for BMI and promote individualized health assessment.
- Current treatment guidelines for obesity medications are based on BMI. (Note: Medications to treat obesity are called “anti-obesity” medications or AOMs. However, given the stigma associated with obesity, I prefer not to use the term “anti-obesity.”) Presently this interferes with long-term obesity treatment. Once BMI is “normal,” many patients lose insurance coverage for their obesity medication, despite needing long-term metabolic support to overcome the compensatory mechanism of weight regain. Obesity is a chronic disease that exists independent of weight status. Therefore, using non-BMI measures will help ensure appropriate lifetime support for obesity.
The preceding are barriers, not impossibilities. In the interim, if BMI is still used in any capacity, the BMI reference chart should be an adjusted BMI chart based on age, race, ethnicity, biological sex, and obesity-related conditions. Furthermore, BMI isn’t the sole determining factor of health status.
Instead, an “abnormal” BMI should initiate conversation and further testing, if needed, to determine an individual’s health. For example, compare two people of the same height with different BMIs and lifestyles. Current studies support that a person flagged as having a high adjusted BMI but who practices a healthy lifestyle and has no metabolic disease is at lower risk than a person with a “normal” BMI but a high waist circumference and an unhealthy lifestyle.
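To make that comparison concrete, here is a hedged sketch of one complementary measure. It assumes the commonly cited waist-to-height ratio threshold of 0.5 as a rough flag for central adiposity; the people and numbers are hypothetical, and this is an illustration of the idea, not a validated risk score:

```python
def waist_to_height_ratio(waist_cm: float, height_cm: float) -> float:
    """Waist circumference divided by height (same units for both)."""
    return waist_cm / height_cm

# Two hypothetical people, both 175 cm tall:
# person A has a "high" BMI but a lean waist; person B the reverse.
people = [("A (high BMI, 80 cm waist)", 80.0),
          ("B (normal BMI, 95 cm waist)", 95.0)]
for label, waist in people:
    ratio = waist_to_height_ratio(waist, 175.0)
    flag = "suggests further assessment" if ratio > 0.5 else "no central-adiposity flag"
    print(f"Person {label}: ratio {ratio:.2f} -> {flag}")
```

Run as written, person A comes out around 0.46 and person B around 0.54, mirroring the point above: the person with the “normal” BMI can be the one whose numbers warrant a closer look.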
Regardless of your personal feelings, the facts are clear. Technology empowers us with better tools than BMI to determine health status. Therefore, it’s not a matter of if we will stop using BMI but when.
Sylvia Gonsahn-Bollie, MD, DipABOM, is an integrative obesity specialist who specializes in individualized solutions for emotional and biological overeating. Connect with her at www.embraceyouweightloss.com or on Instagram @embraceyoumd. Her bestselling book, “Embrace You: Your Guide to Transforming Weight Loss Misconceptions Into Lifelong Wellness,” is Healthline.com’s Best Overall Weight Loss Book 2022 and one of Livestrong.com’s picks for the 8 Best Weight-Loss Books to Read in 2022.
A version of this article first appeared on Medscape.com.
75 Years of the Historic Partnership Between the VA and Academic Medical Centers
The US government has a legacy of providing support for veterans. Pensions were offered to disabled veterans as early as 1776, and benefits were expanded to cover medical needs as the country grew and modernized.1,2 Enacted during the Civil War, the General Pension Act increased benefits for widows and dependents.2 Rehabilitation and vocational training assistance benefits were added after World War I, and the US Department of Veterans Affairs (VA) was created in 1930 to consolidate all benefits under one umbrella organization.2,3
Prior to World War II, the VA lacked the bed capacity for the 4 million veterans who were eligible for care. This shortage became more acute by the end of the war, when the number of eligible veterans increased by 15 million.4 Although the VA successfully built bed capacity through acquisition of military hospitals, VA hospitals struggled to recruit clinical staff.2 Physicians were hesitant to join the VA because civil service salaries were lower than comparable positions in the community, and the VA offered limited opportunities for research or continuing education. These limitations negatively impacted the overall reputation of the VA. The American Medical Association (AMA) was reluctant to directly admit VA physicians for membership because of a “lower” standard of care at VA hospitals.2 This review will describe how the passage of 2 legislative actions, the Servicemen’s Readjustment Act and Public Law (PL) 79-293, and a key policy memorandum set the foundation for the partnership between the VA and academic medical centers. This led to improved medical care for veterans and expansion of health professions education for VA and the nation.5,6
GI Bill of Rights
The passage of the Servicemen’s Readjustment Act of 1944, better known as the GI Bill of Rights, provided education assistance, guaranteed home loans, and unemployment payments to veterans.5 All medical officers serving during the war were eligible for this benefit, which effectively increased the number of potential physician trainees at the end of World War II by almost 60,000.7 Medical education at the time was simultaneously undergoing a transformation, with more rigorous training and a push to standardize medical education across state lines. While prerequisite training was not required for admission to many medical schools and curricula varied in length based on state licensing requirements, more programs were adding premedical education requirements and transitioning to the 4-year curricula seen today. At this time, only 23 states required postgraduate internships for licensure, but this number was growing.8 The American Board of Medical Specialties was established in 1934, several years before World War II, to elevate the quality of care; the desire for residency training and board certification continued to gain traction during the 1940s.9
Medical Training
In anticipation of an influx of medical trainees, the Committee on Postwar Medical Service conducted a comprehensive survey to understand the training needs of physician veterans returning from World War II.7 The survey collected data from medical officers on their desired length of training, interest in specialty board certification, time served, and type of medical practice prior to enlisting. Length of desired training was categorized as short (up to 6 months), which would serve as a refresher course and provide updates on recent advances in medicine and surgery, and long (> 6 months), which resembled a modern internship or residency. Nineteen percent did not want additional training, 22% wished to pursue short courses, and 51% were interested in longer courses. Most respondents also wished to obtain board certification.7 The AMA played a significant role in supporting the expansion of training opportunities, encouraging all accredited hospitals to assess their capacity to determine the number of additional residents they could accommodate. The AMA also awarded hospitals with existing internship programs temporary accreditation to allow them to add extended training through residency programs.7
Medical schools devised creative solutions to meet the needs of returning physician veterans and capitalize on the available educational benefits. Postgraduate refresher courses that varied in length from hours to months were developed focusing on an array of topics. In addition to basic medical principles, courses covered general topics, such as advances in medicine, to specialty topics, such as nutrition or ophthalmology.7 Although the courses could not be counted toward board certification, participation increased by almost 300% in the 1945/1946 academic year relative to the previous year.7 Increasing access to the longer training courses, including internships and residencies, was often achieved through experiences outside the clinical setting. Yale University modified its curriculum to reduce time devoted to lectures on published materials and encourage active learning and community outreach.10 Northwestern University assigned residents to spend 1 of their 3 years “out of residence” in basic science and clinical instruction provided by the medical school. Tuition assistance from the GI Bill supported the additional expenses incurred by the medical school to fund laboratory space, equipment, and the salaries of the basic science instructors and administrative staff.11
Public Law 79-293
Public Law 79-293 was passed on January 3, 1946, establishing the Department of Medicine and Surgery within the VA. The law, which became the basis for Title 38 chapters 73 and 74, allowed VA hospitals flexibility to hire doctors, dentists, and nurses without regard to the civil service regulations and salary restrictions associated with other federal positions.6
Concerns about quality of care had been mounting for years, and the release of several sensationalized and critical articles motivated VA leadership to make sweeping changes. One article described neglect at VA hospitals.12 Excessive paperwork and low economic benefits were identified as barriers to the recruitment of qualified clinicians at the VA.2 The VA Special Medical Advisory Group investigating the claims recommended that the VA encourage their hospitals to affiliate with medical schools to improve the quality of care. This group also recommended that new VA hospitals be constructed near academic medical centers to allow access to consultants.2 Three large veterans service organizations (American Legion, Veterans of Foreign Wars, and Disabled American Veterans) conducted their own investigations in response to the media reports. The organizations reported that the quality of care in most VA hospitals was already on par with the community but indicated that the VA would benefit from expansion of medical research and training, increased bed capacity, reduction in the administrative burden on clinicians, and increased salaries for clinical staff.2
Policy Memorandum No. 2
The relationship between VA and academic medical centers was solidified on January 30, 1946, with adoption of Policy Memorandum No. 2.13 This memorandum allowed for the establishment of relationships with academic medical centers to provide “the veteran a much higher standard of medical care than could be given him with a wholly full-time medical staff.” Shortly after this memorandum was signed, residents from Northwestern University and the University of Illinois at Chicago began clinical rotations at the Hines VA facility in Chicago, Illinois.2 By 1947, 62 medical schools had committed to an affiliation with local VA hospitals and 21 deans’ committees were in operation, which were responsible for the appointment of physician residents and consultants. The AMA extended direct membership privileges to VA physicians, and by 1947 the number of residency positions doubled nationally.14,15 The almost universal support of the relationship between VA and academic affiliates provided educational opportunities for returning veterans and raised standards for medical education nationally.
Current State
Since the passage of PL 79-293 and Policy Memorandum No. 2, the VA-academic health professions education partnership has grown to include 113,000 trainees from more than 1,400 colleges and universities rotating through 150 VA medical centers annually.16 Most VA podiatrists, psychologists, optometrists, and physicians working in VA medical centers also trained at VA, and trainees are 37% more likely to consider a job at VA after completing their clinical rotations. This unique partnership began 76 years ago and continues to provide clinicians “for VA and the nation.”
1. Glasson WH. History of military pension legislation in the United States. Columbia University Press; 1900.
2. Lewis BJ. Veterans Administration medical program relationship with medical schools in the United States. Dissertation. The American University; 1969.
3. Kracke RR. The role of the medical college in the medical care of the veteran. J Med Assoc State Ala. 1950;19(8):225-230.
4. US Department of Veterans Affairs, Office of Public Affairs. VA History in Brief. VA Pamphlet 80-97-2. Washington, DC: United States Department of Veterans Affairs; 1997.
5. Servicemen’s Readjustment Act of 1944. 38 USC § 370 (1944).
6. To establish a Department of Medicine and Surgery in the Veterans’ Administration. 38 USC § 73-74 (1946). Accessed August 2, 2022.
7. Lueth HC. Postgraduate wishes of medical officers: final report on 21,029 questionnaires. J Am Med Assoc. 1945;127(13):759-770.
8. Johnson V, Arestad FH, Tipner A. Medical education in the United States and Canada: forty-sixth annual report on medical education in the United States and Canada by the Council on Medical Education and Hospitals of the American Medical Association. J Am Med Assoc. 1946;131(16):1277-1310.
9. Chesney AM. Some impacts of the specialty board movement on medical education. J Assoc Am Med Coll. 1948;23(2):83-89.
10. Hiscock IV. New frontiers in health education. Can J Public Health. 1946;37(11):452-457.
11. Colwell AR. Principles of graduate medical instruction: with a specific plan of application in a medical school. J Am Med Assoc. 1945;127(13):741-746.
12. Maisel AQ. The veteran betrayed. How long will the Veterans’ Administration continue to give third-rate medical care to first-rate men? Cosmopolitan. 1945(3):45.
13. US Veterans Administration. Policy Memorandum No. 2: Policy in association of veterans’ hospitals with medical schools. January 30, 1946.
14. American Medical Association. Digest of Official Actions: 1846-1958. JAMA. 1946;132:1094.
15. Wentz DK, Ford CV. A brief history of the internship. JAMA. 1984;252(24):3390-3394. doi:10.1001/jama.1984.03350240036035
16. US Department of Veterans Affairs, Veterans Health Administration, Office of Academic Affiliations. Health professions education academic year 2020-2021. Accessed August 8, 2022. https://www.va.gov/OAA/docs/OAA_Stats_AY_2020_2021_FINAL.pdf
When the public misplaces their trust
Not long ago, the grandmother of my son’s friend died of COVID-19 infection. She was elderly and unvaccinated. Her grandson had no regrets over her unvaccinated status. “Why would she inject poison into her body?” he said, and then expressed a strong opinion that she had died because the hospital physicians refused to give her ivermectin and hydroxychloroquine. My son, wisely, did not push the issue.
Soon thereafter, my personal family physician emailed a newsletter to his patients (me included) with 3 important messages: (1) COVID vaccines were available in the office; (2) He was not going to prescribe hydroxychloroquine, no matter how adamantly it was requested; and (3) He warned against threatening him or his staff with lawsuits or violence over refusal to prescribe any unproven medication.
How, as a country, have we come to this? A sizeable portion of the public trusts the advice of quacks, hacks, and political opportunists over that of the nation’s most expert scientists and physicians. The National Institutes of Health maintains a website with up-to-date recommendations on the use of treatments for COVID-19. They assess the existing evidence and make recommendations for or against a wide array of interventions. (They recommend against the use of both ivermectin and hydroxychloroquine.) The Centers for Disease Control and Prevention publishes extensively about the current knowledge on the safety and efficacy of vaccines. Neither agency is part of a “deep state” or conspiracy. They are composed of some of the nation’s leading scientists, including physicians, trying to protect the public from disease and foster good health.
Sadly, some physicians have been a source of inaccurate vaccine information; some even prescribe ineffective treatments despite the evidence. These physicians are letting their politics override their good sense, improperly assessing the scientific literature, or both. Medical licensing agencies and specialty certification boards need to find ways to prevent this that can survive judicial scrutiny while allowing for legitimate scientific debate.
I have been tempted to simply accept the current situation as the inevitable outcome of social media–fueled tribalism. But when we know that the COVID death rate among the unvaccinated is 9 times that of people who have received a booster dose,1 I can’t sit idly by and watch the Internet pundits prevail. Instead, I continue to advise and teach my students to have confidence in trustworthy authorities and websites. Mistakes will be made; corrections will be issued. That is not evidence of malicious intent or incompetence but rather the scientific process in action.
I tell my students that one of the biggest challenges facing them and society is to figure out how to stop, or at least minimize the effects of, incorrect information, misleading statements, and outright lies in a society that values free speech. Physicians—young and old alike—must remain committed to communicating factual information to a not-always-receptive audience. And I wish my young colleagues luck; I hope that their passion for family medicine and their insights into social media may be just the combination that’s needed to redirect the public’s trust back to where it belongs during a health care crisis.
1. Fleming-Dutra KE. COVID-19 Epidemiology and Vaccination Rates in the United States. Presented to the Advisory Committee on Immunization Practices, July 19, 2022. Accessed August 9, 2022. https://www.cdc.gov/vaccines/acip/meetings/downloads/slides-2022-07-19/02-COVID-Fleming-Dutra-508.pdf
Where a child eats breakfast is important
We’ve been told for decades that a child who doesn’t start the day with a good breakfast is entering school at a serious disadvantage. The brain needs a good supply of energy to learn optimally. So the standard wisdom goes. Subsidized school breakfast programs have been built around this chestnut. But, is there solid evidence to support the notion that simply adding a morning meal to a child’s schedule will improve his or her school performance? It sounds like common sense, but is it just one of those old grandmother’s nuggets that doesn’t stand up under close scrutiny?
A recent study from Spain suggests that the relationship between breakfast and school performance is not merely related to the nutritional needs of a growing brain. Using data from nearly 4,000 Spanish children aged 4-14 collected in a 2017 national health survey, the investigators found “skipping breakfast and eating breakfast out of the home were linked to greater odds of psychosocial behavioral problems than eating breakfast at home.” And, we already know that, in general, children who misbehave in school don’t thrive academically.
There were also associations between the absence or presence of certain food groups in the morning meal and behavioral problems. But the data lacked the granularity to support any firm conclusions – although the authors felt that what they consider a healthy Spanish diet may have had a positive influence on behavior.
The findings in this study may simply be another example of the many positive influences that have been associated with family meals and have little to do with what is actually consumed. The association may not have much to do with the family gathering together at a single Norman Rockwell sitting, a reality that I suspect seldom occurs. The apparent positive influence of breakfast may be that it reflects a family’s priorities: that food is important, that sleep is important, and that school is important – so important that scheduling the morning should focus on sending the child off well prepared. The child who is allowed to stay up to an unhealthy hour is likely to be difficult to rouse in the morning in time for breakfast and getting off to school.
It may be that the child’s behavior problems are so disruptive and taxing for the family that even with their best efforts, the parents can’t find the time and energy to provide a breakfast in the home.
On the other hand, the study doesn’t tell us how many children aren’t offered breakfast at home because their families simply can’t afford it. Obviously, the answer depends on the socioeconomic mix of a given community. In some localities this may represent a sizable percentage of the population.
So where does this leave us? Unfortunately, as I read through the discussion at the end of this paper, I felt that the authors were leaning too heavily toward further research into the potential associations between behavior and specific food groups that their data suggested.
For me, the take-home message from this paper is that our existing efforts to improve academic success with food offered in school should also include strategies that promote eating breakfast at home. For example, the backpack take-home food distribution programs that seem to have been effective could include breakfast-targeted items packaged in a way that encourages families to provide breakfast at home.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Five contract red flags every physician should know
Recruiting health care workers is a challenge these days for both private practice and hospital employers, and competition can be fierce. In order to be competitive, employers need to review the package they are offering potential candidates and understand that it’s more than just compensation and benefits that matter.
As someone who reviews physician contracts extensively, I see some common examples of language that may cause a candidate to choose a different position.
Probationary period
Although every employer wants to find out whether they like the physician or midlevel employee they have just hired before fully committing, the inclusion of a probationary period (usually 90 days) is offensive to a candidate, especially one with a choice of contracts.
Essentially, the employer is asking the employee to (potentially) relocate, go through the credentialing process, and turn down other potential offers, all for the possibility that they could easily be terminated. Probationary periods typically allow an employee to be immediately terminated without notice or cause, which can then leave them stranded without a paycheck (and with a new home and/or other recent commitments).
Moreover, contracts with probationary periods tend to allow termination without covering any tail costs or clarifying that the employer will not enforce restrictive provisions (even if those are unlikely to be legally enforceable given the short relationship).
It is important to understand that the process of a person finding a new position, which includes interviewing, contract negotiation, and credentialing, can take up to 6 months. For this reason, probationary provisions create real job insecurity for a candidate.
Entering into a new affiliation is a leap of faith for both the employer and the employee. If the circumstances do not work out, the employer should either fairly compensate the employee for the notice period and ask them not to return to work, or allow them to keep working through the notice period while they search for a new position.
Acceleration of notice
Another objectionable provision that employers like to include in their contracts is one that allows the employer to accelerate the departure date of, and immediately terminate, an employee who has given proper notice.
The contract will contain a standard notice provision, but when the health care professional submits notice, their last date is suddenly accelerated, and they are released without further compensation, notice, or benefits. This type of provision is particularly offensive to health care employees who take the step of giving proper contractual notice and, similar to the probationary language, can create real job insecurity for an employee who suddenly loses their paycheck and has no new job to start.
Medical workers should be paid for the entire notice period whether or not they are allowed to work. Unfortunately, this type of provision is sometimes hidden in contracts and not noticed by employees, who tend to focus on the notice provision itself. I consider this provision to be a red flag about the employer when I review clients’ contracts.
Malpractice tail
Although many employers will claim it is not unusual for an employee to pay for their own malpractice tail, in the current marketplace, the payment of tail can be a deciding factor in whether a candidate accepts a contract.
At a minimum, employers should consider paying for the tail where they do not renew the contract, terminate it without cause, or where the employee terminates it for the employer’s breach. Similarly, I like to seek payment of the tail by the employer where the contract is terminated owing to a change in the law, the use of a force majeure provision, the loss of the employer’s hospital contract, or similar circumstances where termination is outside the control of the employee.
Employers should also consider a provision where they share the cost of a tail or cover the entire cost on the basis of years of service in order to stand out to a potential candidate.
Noncompete provisions
Properly written noncompete provisions are not inherently unacceptable; however, employers should frequently reevaluate the reasonableness of their noncompete language, because such language can make the difference in whether a candidate accepts a contract.
A reasonable noncompete that only protects the employer as necessary and does not restrict the reasonable practice of medicine is always preferable and can be the deciding factor for a candidate. Tying enforcement of a noncompete to reasons for termination (similar to the tail) can also make a positive difference in a candidate’s review of a contract.
Egregious noncompetes, where the candidate is simply informed that the language is “not negotiable,” are unlikely to be compelling to a candidate with other options.
Specifics on location, call, schedule
One thing potential employees find extremely frustrating is when a contract fails to include promises made regarding location, call, and schedule.
These particular items affect a physician’s expectations about a job, including commute time, family life, and lifestyle. An employer or recruiter that makes a lot of promises on these points but won’t commit to the details in writing (or at least offer mutual agreement on these issues) can cause an uncertain candidate to choose the job that offers greater certainty.
There are many provisions of a contract that can make a difference to a particular job applicant. A savvy employer seeking to attract a particular health care professional should find out what the candidate’s specific goals and needs are and consider adjusting the contract to best satisfy them.
At the end of the day, however, at least for physicians and others reviewing contracts that are otherwise fairly equivalent, it may be the fairness of the contract provisions that ends up being the deciding factor.
Ms. Adler is Health Law Group Practice Leader for the law firm Roetzel in Chicago. She reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.