The cult of the suicide risk assessment
Suicide is not a trivial matter – it upends families, robs partners of a loved one, prevents children from having a parent, and can destroy a parent’s most cherished being. It is not surprising that societies have repeatedly made it a goal to study and reduce suicide within their populations.
The suicide rate in the United States is trending upward, from about 10 per 100,000 in 2000 to about 15 per 100,000 in more recent reports. The increasing suicide rates have been accompanied by increasing distress among many strata of society. At the public health level, analysts are witnessing not just increasing suicide rates, but a shocking rise in all “deaths of despair,”1 among which suicide can be considered the ultimate example.
On an individual level, many know someone who has died of suicide or suffered from a serious suicide attempt. From the public health level to the individual level, advocacy has called for various interventions in the field of psychiatry to remedy this tragic problem.
Psychiatrists have been firsthand witnesses to this increasing demand for suicide interventions. When we were in residency, the norm was to perform a suicide risk assessment at the time of admission to the hospital and again at the time of discharge. As the years passed, the new normal within psychiatric hospitals has shifted to asking about suicidality on a daily basis.
In what seems to us like an escalating arms race, the emerging standard of care at many facilities is now not only for daily suicide risk assessments by each psychiatrist, but also to require nurses to ask about suicidality during every 8-hour shift – in addition to documented inquiries about suicidality by other allied staff on the psychiatric unit. As a result, it is not uncommon for a patient hospitalized at an academic center to receive more than half a dozen suicide risk assessments in a day (first by the medical student, at least once – often more than once – by the resident, again by the attending psychiatrist, then the social worker and three nurses in 24 hours).
One of the concerns about such an approach is the lack of logic inherent to many risk assessment tools and symptom scales. Many of us are familiar with the Patient Health Questionnaire (PHQ-9) to assess depression.2 The PHQ-9 asks patients to consider “over the last 2 weeks, how often have you ...” in relation to nine symptoms associated with depression. It has always defied reason to perform a PHQ-9 every day and expect the answers to change from “nearly every day” to “not at all,” considering only 1 day has passed since the patient last answered the questions. Yet daily, or near daily, PHQ-9 scores are a frequently used tool for tracking symptom improvement in response to treatments, such as electroconvulsive therapy, performed multiple times a week.
One can argue that the patient’s perspective on how symptomatic he or she has been over the past 2 weeks may change rapidly with alleviation of a depressed mood. However, the PHQ-9 is both reported to be, and often regarded as, an objective score. If one wishes to utilize it as such, the defense of its use should not be that it is a subjective report with just as much utility as “Rate your depression on a scale of 0-27.”
Similarly, many suicide scales were intended to assess thoughts of suicide in the past month3 or have been re-tooled to address this particular concern by asking “since the last contact.”4 It is baffling to see a chart with many dozens of suicide risk assessments with at times widely differing answers, yet all measuring thoughts of suicide in the past month. Is one to expect the answer to “How many times have you had these thoughts [of suicide ideation]? (1) Less than once a week (2) Once a week ...” to change between 8 a.m. and noon? Furthermore, for the purpose of assessing acute risk of suicidality in the immediate future, to only consider symptoms since the last contact – or past 2 weeks, past month, etc. – is of unclear significance.
Provider liability
Another concern is the liability placed on providers. A common problem encountered in the inpatient setting is insurance companies refusing to reimburse a hospital stay for depressed patients denying suicidality.
Any provider in the position of caring for such a patient must ask: What is the likelihood of someone providing a false negative – a false denial of suicidality? Is the likelihood of a suicidal person denying suicidality different if asked 5 or 10 or more times in a day? There are innumerable instances where a patient at a very high risk of self-harm has denied suicidality, been discharged from the hospital, and suffered terrible consequences. Ethically, the psychiatrist aware of this risk is no more at ease discharging these patients, whether it is one suicide risk scale or a dozen that suggest a patient is at low risk.
Alternatively, it may feel untenable from a medicolegal perspective for a psychiatrist to discharge a patient denying suicidality when the chart includes over a dozen previously documented elevated suicide risk assessments in the past 72 hours. By placing the job of suicide risk assessment in the hands of providers of varying levels of training and responsibility, a situation is created in which the seasoned psychiatrist who would otherwise be comfortable discharging a patient feels unable to do so because every other note-writer in the record – from the triage nurse to the medical assistant to the sitter in the emergency department – has recorded the patient as high risk for suicide. When put in such a position, the thought often occurs that systems of care, rather than individual providers, are protected most by ever escalating requirements for suicide risk documentation. To make a clinical decision contrary to the body of suicide risk documentation puts the provider at risk of being scapegoated by the system of care, which can point to its illogical and ineffective, though profusely documented, suicide prevention protocols.
Limitations of risk assessments
Considering the ongoing rise in the use of suicide risk assessments, one would expect that the evidence for their efficacy was robust and well established. Yet a thorough review of suicide risk assessments funded by the MacArthur Foundation, which examined decades of research, came to disheartening conclusions: “predictive ability has not improved over the past 50 years”; “no risk factor category or subcategory is substantially stronger than any other”; and “predicting solely according to base rates may be comparable to prediction with current risk factors.”5
Those findings were consistent with the conclusions of many other studies, which have summarized the utility of suicide risk assessments as follows: “occurrence of suicide is too low to identify those individuals who are likely to die by suicide”;6 “suicide prediction models produce accurate overall classification models, but their accuracy of predicting a future event is near zero”;7 “risk stratification is too inaccurate to be clinically useful and might even be harmful”;8 “suicide risk prediction [lacks] any items or information that to a useful degree permit the identification of persons who will complete suicide”;9 “existing suicide prediction tools have little current clinical value”;10 “our current preoccupation with risk assessment has ... created a mythology with no evidence to support it.”11 And that’s to cite just a few.
Sadly, we have known about the limitations of suicide risk assessments for many decades. In 1983, a large VA prospective study, which aimed to identify veterans who would die by suicide, examined 4,800 patients with a wide range of instruments and measures.12 This study concluded that “discriminant analysis was clearly inadequate in correctly classifying the subjects. For an event as rare as suicide, our predictive tools and guides are simply not equal to the task.” The authors described the feelings of many in stating “courts and public opinion expect physicians to be able to pick out the particular persons who will later commit suicide. Although we may reconstruct causal chains and motives, we do not possess the tools to predict suicides.”
Yet, even several decades prior, in 1954, Dr. Albert Rosen performed an elegant statistical analysis and predicted that, considering the low base rate of suicide, suicide risk assessments are “of no practical value, for it would be impossible to treat the prodigious number of false positives.”13 It seems that we continue to be unable to accept Dr. Rosen’s premonition despite decades of confirmatory evidence.
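Dr. Rosen’s base-rate argument can be made concrete with a back-of-the-envelope sketch. The numbers below are hypothetical assumptions chosen for illustration – a U.S.-like base rate of about 15 suicides per 100,000 per year and an unrealistically accurate screening test – not figures from any actual instrument:

```python
# Illustrative sketch of the base-rate problem Rosen described in 1954.
# All inputs are hypothetical assumptions, not measured properties of
# any real suicide risk assessment.

def screening_outcomes(population, base_rate, sensitivity, specificity):
    """Return (true_positives, false_positives, ppv) for a screening test."""
    cases = population * base_rate
    non_cases = population - cases
    tp = cases * sensitivity              # cases correctly flagged
    fp = non_cases * (1 - specificity)    # non-cases incorrectly flagged
    ppv = tp / (tp + fp)                  # positive predictive value
    return tp, fp, ppv

# Assume ~15 suicides per 100,000 per year, and a test far better than
# any in the literature: 90% sensitivity and 90% specificity.
tp, fp, ppv = screening_outcomes(100_000, 15 / 100_000, 0.90, 0.90)
print(f"true positives:  {tp:.1f}")   # about 13 to 14 people
print(f"false positives: {fp:.0f}")   # nearly 10,000 people
print(f"PPV: {ppv:.2%}")              # well under 1%
```

Even under these generous assumptions, the false positives outnumber the true positives by roughly 700 to 1 – the “prodigious number of false positives” that, in Rosen’s words, would be impossible to treat.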
“Quantity over quality”
Regardless of those sobering reports, one can reasonably argue that the periodic performance of a suicide risk assessment may have clinical utility in reminding us of modifiable risk factors such as intoxication, social isolation, and access to lethal means. One can also reasonably argue that these risk assessments may provide useful education to patients and their families on epidemiological risk factors such as gender, age, and marital status. But our pursuit of serial suicide risk assessments throughout the day is encouraging providers to focus on a particular risk factor that changes from moment to moment and has particularly low validity, that being self-reported suicidality.
Reported suicidality is one of the few risk factors that can change from shift to shift. But 80% of people who die by suicide had not previously expressed suicidality, and 98.3% of people who have endorsed suicidality do not die by suicide.14 While the former statistic may improve with increased assessment, the latter will likely worsen.
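The trade-off in those two cited figures can be restated in screening terms. Treating “80% had not expressed suicidality” as implying roughly 20% sensitivity, and “98.3% do not die by suicide” as a positive predictive value of roughly 1.7%, a two-line calculation shows how weak a signal endorsed suicidality is:

```python
# Restating the cited figures (McHugh et al., 2019) in screening terms.
# These mappings are our interpretation of the quoted statistics:
#   80% of suicide deaths had not endorsed suicidality -> sensitivity ~ 0.20
#   98.3% of endorsers do not die by suicide           -> PPV ~ 0.017

sensitivity = 0.20
ppv = 1 - 0.983

# For each person correctly flagged, how many are flagged incorrectly?
false_pos_per_true_pos = (1 - ppv) / ppv
print(f"sensitivity: {sensitivity:.0%}")
print(f"false positives per true positive: {false_pos_per_true_pos:.0f}")
# roughly 58 false positives accompany every true positive
```

More frequent questioning may catch a few more of the 80% who would otherwise deny suicidality (raising sensitivity), but each additional positive screen adds far more false positives than true ones, driving the already tiny predictive value lower still.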
Suicide is not a trivial matter. We admire those who study it and advocate for better interventions. We have compassion for those who have suffered the loss of a loved one to suicide. Our patients have died as a result of the human limitations surrounding suicide prevention. Recognizing the weight of suicide and making an effort to avoid minimizing its immense consequences drive our desire to be honest with ourselves, our patients and their families, and society. That includes the unfortunate truth regarding the current state of the evidence and our ability to enact change.
It is our concern that the rising fascination with repeated suicide risk assessment is misguided in its current form and serves the purpose of appeasing administrators more than reflecting a scientific understanding of the literature. More sadly, we are concerned that this “quantity-over-quality” approach is yet another barrier to practicing what may be one of the few interventions with any hope of meaningfully impacting a patient’s risk of suicide in the clinical setting – spending time connecting with our patients.
Dr. Badre is a clinical and forensic psychiatrist in San Diego. He holds teaching positions at the University of California, San Diego, and the University of San Diego. He teaches medical education, psychopharmacology, ethics in psychiatry, and correctional care. Dr. Badre can be reached at his website, BadreMD.com. Dr. Compton is a member of the psychiatry faculty at University of California, San Diego. His background includes medical education, mental health advocacy, work with underserved populations, and brain cancer research. Dr. Badre and Dr. Compton have no conflicts of interest.
References
1. Joint Economic Committee. Long-Term Trends in Deaths of Despair. SCP Report 4-19. 2019.
2. Kroenke K and Spitzer RL. The PHQ-9: A new depression diagnostic and severity measure. Psychiatr Ann. 2002;32(9):509-15. doi: 10.3928/0048-5713-20020901-06.
3. Columbia-Suicide Severity Rating Scale (C-SSRS) Full Lifetime/Recent.
4. Columbia-Suicide Severity Rating Scale (C-SSRS) Full Since Last Contact.
5. Franklin JC et al. Risk factors for suicidal thoughts and behaviors: A meta-analysis of 50 years of research. Psychol Bull. 2017 Feb;143(2):187-232. doi: 10.1037/bul0000084.
6. Beautrais AL. Further suicidal behavior among medically serious suicide attempters. Suicide Life Threat Behav. 2004 Spring;34(1):1-11. doi: 10.1521/suli.34.1.1.27772.
7. Belsher BE et al. Prediction models for suicide attempts and deaths: A systematic review and simulation. JAMA Psychiatry. 2019 Jun 1;76(6):642-651. doi: 10.1001/jamapsychiatry.2019.0174.
8. Carter G et al. Royal Australian and New Zealand College of Psychiatrists clinical practice guideline for the management of deliberate self-harm. Aust N Z J Psychiatry. 2016 Oct;50(10):939-1000. doi: 10.1177/0004867416661039.
9. Fosse R et al. Predictors of suicide in the patient population admitted to a locked-door psychiatric acute ward. PLoS One. 2017 Mar 16;12(3):e0173958. doi: 10.1371/journal.pone.0173958.
10. Kessler RC et al. Suicide prediction models: A critical review of recent research with recommendations for the way forward. Mol Psychiatry. 2020 Jan;25(1):168-79. doi: 10.1038/s41380-019-0531-0.
11. Mulder R. Problems with suicide risk assessment. Aust N Z J Psychiatry. 2011 Aug;45(8):605-7. doi: 10.3109/00048674.2011.594786.
12. Pokorny AD. Prediction of suicide in psychiatric patients: Report of a prospective study. Arch Gen Psychiatry. 1983 Mar;40(3):249-57. doi: 10.1001/archpsyc.1983.01790030019002.
13. Rosen A. Detection of suicidal patients: An example of some limitations in the prediction of infrequent events. J Consult Psychol. 1954 Dec;18(6):397-403. doi: 10.1037/h0058579.
14. McHugh CM et al. Association between suicidal ideation and suicide: Meta-analyses of odds ratios, sensitivity, specificity and positive predictive value. BJPsych Open. 2019 Mar;5(2):e18. doi: 10.1192/bjo.2018.88.
Sadly, we have known about the limitations of suicide risk assessments for many decades. In 1983, a large VA prospective study, which aimed to identify veterans who would die by suicide, examined 4,800 patients with a wide range of instruments and measures.12 This study concluded that "discriminant analysis was clearly inadequate in correctly classifying the subjects. For an event as rare as suicide, our predictive tools and guides are simply not equal to the task." The authors captured the feelings of many in stating "courts and public opinion expect physicians to be able to pick out the particular persons who will later commit suicide. Although we may reconstruct causal chains and motives, we do not possess the tools to predict suicides."
Yet, even several decades prior, in 1954, Dr. Albert Rosen performed an elegant statistical analysis and predicted that, considering the low base rate of suicide, suicide risk assessments are “of no practical value, for it would be impossible to treat the prodigious number of false positives.”13 It seems that we continue to be unable to accept Dr. Rosen’s premonition despite decades of confirmatory evidence.
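Dr. Rosen's base-rate argument can be made concrete with a small calculation. The screening accuracies below are hypothetical, chosen only to illustrate the arithmetic; no real suicide risk instrument achieves, or claims, these figures:

```python
# Illustrative only: shows why a low base rate swamps even an
# accurate screen with false positives (Bayes' theorem / PPV).

def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(event | positive screen), computed via Bayes' theorem."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Suppose a hypothetical screen with 90% sensitivity and 90%
# specificity, applied where the annual suicide rate is 15 per 100,000.
ppv = positive_predictive_value(0.00015, 0.90, 0.90)
print(f"PPV: {ppv:.2%}")  # roughly 0.13%: over 99% of positives are false
```

Even under these generous assumptions, more than 700 people would screen positive for every person who dies by suicide, which is precisely the "prodigious number of false positives" Dr. Rosen warned could not feasibly be treated.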
“Quantity over quality”
Regardless of those sobering reports, one can reasonably argue that the periodic performance of a suicide risk assessment may have clinical utility in reminding us of modifiable risk factors such as intoxication, social isolation, and access to lethal means. One can also reasonably argue that these risk assessments may provide useful education to patients and their families on epidemiological risk factors such as gender, age, and marital status. But our pursuit of serial suicide risk assessments throughout the day encourages providers to focus on a particular risk factor that changes from moment to moment and has particularly low validity: self-reported suicidality.
Reported suicidality is one of the few risk factors that can change from shift to shift. But 80% of people who die by suicide had not previously expressed suicidality, and 98.3% of people who have endorsed suicidality do not die by suicide.14 While the former statistic may improve with increased assessment, the latter will likely worsen.
Suicide is not a trivial matter. We admire those who study it and advocate for better interventions. We have compassion for those who have suffered the loss of a loved one to suicide. Our patients have died as a result of the human limitations surrounding suicide prevention. Recognizing the weight of suicide and making an effort to avoid minimizing its immense consequences drive our desire to be honest with ourselves, our patients and their families, and society. That includes the unfortunate truth regarding the current state of the evidence and our ability to enact change.
It is our concern that the rising fascination with repeated suicide risk assessment is misguided in its current form and serves the purpose of appeasing administrators more than reflecting a scientific understanding of the literature. More sadly, we are concerned that this “quantity-over-quality” approach is yet another barrier to practicing what may be one of the few interventions with any hope of meaningfully impacting a patient’s risk of suicide in the clinical setting – spending time connecting with our patients.
Dr. Badre is a clinical and forensic psychiatrist in San Diego. He holds teaching positions at the University of California, San Diego, and the University of San Diego. He teaches medical education, psychopharmacology, ethics in psychiatry, and correctional care. Dr. Badre can be reached at his website, BadreMD.com. Dr. Compton is a member of the psychiatry faculty at University of California, San Diego. His background includes medical education, mental health advocacy, work with underserved populations, and brain cancer research. Dr. Badre and Dr. Compton have no conflicts of interest.
References
1. Joint Economic Committee. Long-term trends in deaths of despair. SCP Report 4-19. 2019.
2. Kroenke K and Spitzer RL. The PHQ-9: A new depression diagnostic and severity measure. Psychiatr Ann. 2002;32(9):509-15. doi: 10.3928/0048-5713-20020901-06.
3. Columbia-Suicide Severity Rating Scale (C-SSRS) Full Lifetime/Recent.
4. Columbia-Suicide Severity Rating Scale (C-SSRS) Full Since Last Contact.
5. Franklin JC et al. Risk factors for suicidal thoughts and behaviors: A meta-analysis of 50 years of research. Psychol Bull. 2017 Feb;143(2):187-232. doi: 10.1037/bul0000084.
6. Beautrais AL. Further suicidal behavior among medically serious suicide attempters. Suicide Life Threat Behav. 2004 Spring;34(1):1-11. doi: 10.1521/suli.34.1.1.27772.
7. Belsher BE et al. Prediction models for suicide attempts and deaths: A systematic review and simulation. JAMA Psychiatry. 2019 Jun 1;76(6):642-651. doi: 10.1001/jamapsychiatry.2019.0174.
8. Carter G et al. Royal Australian and New Zealand College of Psychiatrists clinical practice guideline for the management of deliberate self-harm. Aust N Z J Psychiatry. 2016 Oct;50(10):939-1000. doi: 10.1177/0004867416661039.
9. Fosse R et al. Predictors of suicide in the patient population admitted to a locked-door psychiatric acute ward. PLoS One. 2017 Mar 16;12(3):e0173958. doi: 10.1371/journal.pone.0173958.
10. Kessler RC et al. Suicide prediction models: A critical review of recent research with recommendations for the way forward. Mol Psychiatry. 2020 Jan;25(1):168-79. doi: 10.1038/s41380-019-0531-0.
11. Mulder R. Problems with suicide risk assessment. Aust N Z J Psychiatry. 2011 Aug;45(8):605-7. doi: 10.3109/00048674.2011.594786.
12. Pokorny AD. Prediction of suicide in psychiatric patients: Report of a prospective study. Arch Gen Psychiatry. 1983 Mar;40(3):249-57. doi: 10.1001/archpsyc.1983.01790030019002.
13. Rosen A. Detection of suicidal patients: An example of some limitations in the prediction of infrequent events. J Consult Psychol. 1954 Dec;18(6):397-403. doi: 10.1037/h0058579.
14. McHugh CM et al. Association between suicidal ideation and suicide: Meta-analyses of odds ratios, sensitivity, specificity and positive predictive value. BJPsych Open. 2019 Mar;5(2):e18. doi: 10.1192/bjo.2018.88.
New AI-enhanced bandages poised to transform wound treatment
You cut yourself. You put on a bandage. In a week or so, your wound heals.
Most people take this routine for granted. But for the more than 8.2 million Americans who have chronic wounds, it’s not so simple.
Traumatic injuries, post-surgical complications, advanced age, and chronic illnesses like diabetes and vascular disease can all disrupt the delicate healing process, leading to wounds that last months or years.
Left untreated, about 30% of chronic wounds end in amputation. And recent studies show the risk of dying from a chronic wound complication within 5 years rivals that of most cancers.
Yet until recently, medical technology had not kept up with what experts say is a snowballing threat to public health.
“Wound care – even with all of the billions of products that are sold – still exists on kind of a medieval level,” said Geoffrey Gurtner, MD, chair of the department of surgery and professor of biomedical engineering at the University of Arizona College of Medicine. “We’re still putting on poultices and salves ... and when it comes to diagnosing infection, it’s really an art. I think we can do better.”
Old-school bandage meets AI
Dr. Gurtner is among dozens of clinicians and researchers reimagining the humble bandage, combining cutting-edge materials science with artificial intelligence and patient data to develop “smart bandages” that do far more than shield a wound.
Someday soon, these paper-thin bandages embedded with miniaturized electronics could monitor the healing process in real time, alerting the patient – or a doctor – when things go wrong. With the press of a smartphone button, that bandage could deliver medicine to fight an infection or an electrical pulse to stimulate healing.
Some “closed-loop” designs need no prompting, instead monitoring the wound and automatically giving it what it needs.
Others in development could halt a battlefield wound from hemorrhaging or kick-start healing in a blast wound, preventing longer-term disability.
The same technologies could – if the price is right – speed up healing and reduce scarring in minor cuts and scrapes, too, said Dr. Gurtner.
And unlike many cutting-edge medical innovations, these next-generation bandages could be made relatively cheaply and benefit some of the most vulnerable populations, including older adults, people with low incomes, and those in developing countries.
They could also save the health care system money, as the U.S. spends more than $28 billion annually treating chronic wounds.
“This is a condition that many patients find shameful and embarrassing, so there hasn’t been a lot of advocacy,” said Dr. Gurtner, outgoing board president of the Wound Healing Society. “It’s a relatively ignored problem afflicting an underserved population that has a huge cost. It’s a perfect storm.”
How wounds heal, or don’t
Wound healing is one of the most complex processes of the human body.
First platelets rush to the injury, prompting blood to clot. Then immune cells emit compounds called inflammatory cytokines, helping to fight off pathogens and keep infection at bay. Other compounds, including nitric oxide, spark the growth of new blood vessels and collagen to rebuild skin and connective tissue. As inflammation slows and stops, the flesh continues to reform.
But some conditions can stall the process, often in the inflammatory stage.
In people with diabetes, high glucose levels and poor circulation tend to sabotage the process. And people with nerve damage from spinal cord injuries, diabetes, or other ailments may not be able to feel it when a wound is getting worse or reinjured.
“We end up with patients going months with open wounds that are festering and infected,” said Roslyn Rivkah Isseroff, MD, professor of dermatology at the University of California Davis and head of the VA Northern California Health Care System’s wound healing clinic. “The patients are upset with the smell. These open ulcers put the patient at risk for systemic infection, like sepsis.” It can impact mental health, draining the patient’s ability to care for their wound.
“We see them once a week and send them home and say change your dressing every day, and they say, ‘I can barely move. I can’t do this,’ ” said Dr. Isseroff.
Checking for infection means removing bandages and culturing the wound. That can be painful, and results take time.
A lot can happen to a wound in a week.
“Sometimes, they come back and it’s a disaster, and they have to be admitted to the ER or even get an amputation,” Dr. Gurtner said.
People who are housing insecure or lack access to health care are even more vulnerable to complications.
“If you had the ability to say ‘there is something bad happening,’ you could do a lot to prevent this cascade and downward spiral.”
Bandages 2.0
In 2019, the Defense Advanced Research Projects Agency, the research arm of the Department of Defense, launched the Bioelectronics for Tissue Regeneration program to encourage scientists to develop a “closed-loop” bandage capable of both monitoring and hastening healing.
Tens of millions of dollars in funding have since kick-started a flood of innovation.
“It’s kind of a race to the finish,” said Marco Rolandi, PhD, associate professor of electrical and computer engineering at the University of California Santa Cruz and the principal investigator for a team including engineers, medical doctors, and computer scientists from UC Santa Cruz, UC Davis, and Tufts. “I’ve been amazed and impressed at all the work coming out.”
His team’s goal is to cut healing time in half by using (a) real-time monitoring of how a wound is healing – using indicators like temperature, pH level, oxygen, moisture, glucose, electrical activity, and certain proteins, and (b) appropriate stimulation.
“Every wound is different, so there is no one solution,” said Dr. Isseroff, the team’s clinical lead. “The idea is that it will be able to sense different parameters unique to the wound, use AI to figure out what stage it is in, and provide the right stimulus to kick it out of that stalled stage.”
The team has developed a proof-of-concept prototype: a bandage embedded with a tiny camera that takes pictures and transmits them to a computer algorithm to assess the wound’s progress. Miniaturized battery-powered actuators, or motors, automatically deliver medication.
Phase I trials in rodents went well, Dr. Rolandi said. The team is now testing the bandage on pigs.
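As a rough illustration of the closed-loop concept the team describes (sense, classify, stimulate), the sketch below uses invented parameter names, thresholds, and healing-stage labels. It is not the researchers' actual algorithm or hardware interface, only a schematic of how such a feedback loop fits together:

```python
# Hypothetical sketch of a closed-loop smart-bandage controller:
# read sensor values, infer the healing stage, pick a response.
from dataclasses import dataclass

@dataclass
class WoundReading:
    temperature_c: float   # local skin temperature
    ph: float              # wound-surface pH
    impedance_ohm: float   # stand-in for electrical conductivity

def classify_stage(r: WoundReading) -> str:
    """Toy stand-in for the AI classifier, using made-up thresholds."""
    if r.temperature_c > 38.0 or r.ph > 8.0:
        return "stalled-inflammatory"   # e.g., possible infection
    if r.impedance_ohm > 50_000:
        return "proliferative"
    return "remodeling"

def choose_stimulus(stage: str) -> str:
    """Map the inferred stage to an action the bandage could deliver."""
    return {
        "stalled-inflammatory": "release antimicrobial dose",
        "proliferative": "apply electrical stimulation",
        "remodeling": "no action; keep monitoring",
    }[stage]

reading = WoundReading(temperature_c=38.4, ph=8.2, impedance_ohm=30_000)
stage = classify_stage(reading)
print(stage, "->", choose_stimulus(stage))
```

The design point is that sensing and intervention happen in one loop on the device itself, so a deteriorating wound triggers a response in minutes rather than at the next weekly clinic visit.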
Across the globe, other promising developments are underway.
In a scientific paper published in May, researchers at the University of Glasgow described a new “low-cost, environmentally friendly” bandage embedded with light-emitting diodes that use ultraviolet light to kill bacteria – no antibiotics needed. The fabric is stitched with a slim, flexible coil that powers the lights without a battery using wireless power transfer. In lab studies, it eradicated gram-negative bacteria (some of the nastiest bugs) in 6 hours.
Also in May, in the journal Bioactive Materials, a Penn State team detailed a bandage with medicine-injecting microneedles that can halt bleeding immediately after injury. In lab and animal tests, it reduced clotting time from 11.5 minutes to 1.3 minutes and bleeding by 90%.
“With hemorrhaging injuries, it is often the loss of blood – not the injury itself – that causes death,” said study author Amir Sheikhi, PhD, assistant professor of chemical and biomedical engineering at Penn State. “Those 10 minutes could be the difference between life and death.”
Another smart bandage, developed at Northwestern University, Chicago, harmlessly dissolves – electrodes and all – into the body after it is no longer needed, eliminating what can be a painful removal.
Guillermo Ameer, DSc, a study author reporting on the technology in Science Advances, hopes it could be made cheaply and used in developing countries.
“We’d like to create something that you could use in your home, even in a very remote village,” said Dr. Ameer, professor of biomedical engineering at Northwestern.
Timeline for clinical use
These are early days for the smart bandage, scientists say. Most studies have been in rodents and more work is needed to develop human-scale bandages, reduce cost, solve long-term data storage, and ensure material adheres well without irritating the skin.
But Dr. Gurtner is hopeful that some iteration could be used in clinical practice within a few years.
In May, he and colleagues at Stanford (Calif.) University published a paper in Nature Biotechnology describing their smart bandage. It includes a microcontroller unit, a radio antenna, biosensors, and an electrical stimulator all affixed to a rubbery, skin-like polymer (or hydrogel) about the thickness of a single coat of latex paint.
The bandage senses changes in temperature and electrical conductivity as the wound heals, and it gives electrical stimulation to accelerate that healing.
Animals treated with the bandage healed 25% faster, with 50% less scarring.
Electrical currents are already used for wound healing in clinical practice, Dr. Gurtner said. Because the stimulus is already approved and the cost to make the bandage could be low (as little as $10 to $50), he believes it could be ushered through the approval processes relatively quickly.
“Is this the ultimate embodiment of all the bells and whistles that are possible in a smart bandage? No. Not yet,” he said. “But we think it will help people. And right now, that’s good enough.”
A version of this article appeared on WebMD.com.
Phase I trials in rodents went well, Dr. Rolandi said. The team is now testing the bandage on pigs.
Across the globe, other promising developments are underway.
In a scientific paper published in May, researchers at the University of Glasgow described a new “low-cost, environmentally friendly” bandage embedded with light-emitting diodes that use ultraviolet light to kill bacteria – no antibiotics needed. The fabric is stitched with a slim, flexible coil that powers the lights without a battery using wireless power transfer. In lab studies, it eradicated gram-negative bacteria (some of the nastiest bugs) in 6 hours.
Also in May, in the journal Bioactive Materials, a Penn State team detailed a bandage with medicine-injecting microneedles that can halt bleeding immediately after injury. In lab and animal tests, it reduced clotting time from 11.5 minutes to 1.3 minutes and bleeding by 90%.
“With hemorrhaging injuries, it is often the loss of blood – not the injury itself – that causes death,” said study author Amir Sheikhi, PhD, assistant professor of chemical and biomedical engineering at Penn State. “Those 10 minutes could be the difference between life and death.”
Another smart bandage, developed at Northwestern University, Chicago, harmlessly dissolves – electrodes and all – into the body after it is no longer needed, eliminating what can be a painful removal.
Guillermo Ameer, DSc, a study author reporting on the technology in Science Advances, hopes it could be made cheaply and used in developing countries.
“We’d like to create something that you could use in your home, even in a very remote village,” said Dr. Ameer, professor of biomedical engineering at Northwestern.
Timeline for clinical use
These are early days for the smart bandage, scientists say. Most studies have been in rodents and more work is needed to develop human-scale bandages, reduce cost, solve long-term data storage, and ensure material adheres well without irritating the skin.
But Dr. Gurtner is hopeful that some iteration could be used in clinical practice within a few years.
In May, he and colleagues at Stanford (Calif.) University published a paper in Nature Biotechnology describing their smart bandage. It includes a microcontroller unit, a radio antenna, biosensors, and an electrical stimulator all affixed to a rubbery, skin-like polymer (or hydrogel) about the thickness of a single coat of latex paint.
The bandage senses changes in temperature and electrical conductivity as the wound heals, and it gives electrical stimulation to accelerate that healing.
Animals treated with the bandage healed 25% faster, with 50% less scarring.
Electrical currents are already used for wound healing in clinical practice, Dr. Gurtner said. Because the stimulus is already approved and the cost to make the bandage could be low (as little as $10 to $50), he believes it could be ushered through the approval processes relatively quickly.
“Is this the ultimate embodiment of all the bells and whistles that are possible in a smart bandage? No. Not yet,” he said. “But we think it will help people. And right now, that’s good enough.”
A version of this article appeared on WebMD.com.
IQ and concussion recovery
Pediatric concussion is one of those rare phenomena whose emergence and clarification we may be witnessing within a single generation. When I was serving as the game doctor for our local high school football team in the 1970s, I and many other physicians had a very simplistic view of concussion. If the patient never lost consciousness and had a reasonably intact short-term memory, we didn’t seriously entertain concussion as a diagnosis. “What’s the score and who is the president?” were my favorite screening questions.
Obviously, we were underdiagnosing and mismanaging concussion. In part thanks to some high-profile athletes who suffered multiple concussions and eventually chronic traumatic encephalopathy (CTE), physicians began to realize that they should be looking more closely at children who sustained a head injury. The diagnostic criteria were expanded to include any injury that even temporarily affected brain function.
With the new appreciation for the risk of multiple concussions, the focus broadened to include the question of when it is safe for the athlete to return to competition. What signs or symptoms can the patient offer us so we can be sure his or her brain is sufficiently recovered? Here we stepped off into a deep abyss of ignorance. Fortunately, it became obvious fairly quickly that imaging studies weren’t going to help us, as they were invariably normal or at least didn’t tell us anything that wasn’t obvious on a physical exam.
If the patient had a headache, complained of dizziness, or manifested amnesia, monitoring the patient was fairly straightforward. But, in the absence of symptoms and with no obvious way to determine the pace of recovery of an organ we couldn’t visualize, clinicians were pulling criteria and timetables out of thin air. Guessing that the concussed brain was in some ways like a torn muscle or overstretched tendon, “brain rest” was often suggested. So no TV, no reading, and certainly none of the cerebrally challenging activity of school. Fortunately, we don’t hear much about the notion of brain rest anymore, and there is at least one study suggesting that patients kept home from school recover more slowly.
But some patients are slow to recover. Sometimes they describe headache or dizziness, but often they complain of a vague mental unwellness. “Brain fog,” a term that has emerged in the wake of the COVID pandemic, might be an apt descriptor. Management of these slow recoverers has been a challenge.
However, two recent articles in the journal Pediatrics may provide some clarity and offer guidance in their management. In a study coming from the psychology department at Georgia State University, researchers reported that they were able to find “no evidence of clinically meaningful differences in IQ after pediatric concussion.” In their words, there is “strong evidence against reduced intelligence in the first few weeks to month after pediatric concussion.”
While their findings may simply toss the IQ onto the pile of worthless measures of healing, a companion commentary by Talin Babikian, PhD, a psychologist at the Semel Institute for Neuroscience and Human Behavior at UCLA, provides a more nuanced interpretation. He writes that if we are looking for an explanation when a patient’s recovery is taking longer than we might expect, we need to look beyond structural damage. Maybe the patient has a previously undiagnosed premorbid condition affecting his or her intellectual, cognitive, or learning abilities. Could the stall in improvement be the result of other symptoms? Here fatigue and sleep deprivation may be the culprits. Could some underlying emotional factor such as anxiety or depression be the problem? For example, I have seen patients whose fear of reinjury has prevented their return to full function. And, finally, the patient may be avoiding a “nonpreferred or challenging situation” unrelated to the injury.
In other words, the concussion may simply be the most obvious rip in a fabric that was already frayed and under stress. This kind of broad holistic (a word I usually like to avoid) thinking may be what is lacking as we struggle to understand other mysterious and chronic conditions such as Lyme disease and chronic fatigue syndrome.
While these two papers help provide some clarity in the management of pediatric concussion, what they fail to address is the bigger question of the relationship between head injury and CTE. The answers to that conundrum are enshrouded in a mix of politics and publicity that I doubt will clear in the near future.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Almonds and almond oil
Almonds and almond oil are known to exhibit anti-inflammatory, antihepatotoxic, and immunity-boosting activity.1 Seeds of the deciduous almond tree (Prunus dulcis), which is native to Iran and parts of the Levant, contain copious amounts of phenols and polyphenols, fatty acids, and vitamin E, all of which are known to exert antioxidant activity.2-5 These seeds have been found to have a substantial impact on serum lipids.4 Emollient and sclerosant characteristics have also been linked to almond oil, which has been found to ameliorate complexion and skin tone.5 Significantly, in vitro and in vivo studies have shown that UVB-induced photoaging can be attenuated through the use of almond oil and almond skin extract.2 Further, in traditional Chinese medicine, Ayurveda, and ancient Greco-Persian medicine, almond oil was used to treat cutaneous conditions, including eczema and psoriasis.1
Antiphotoaging activity
In 2019, Foolad and Vaughn conducted a prospective, investigator-blind, randomized controlled trial to determine the effects of almond consumption on facial sebum production and wrinkles. Participants, postmenopausal women with Fitzpatrick skin types I and II recruited through the UC Davis Dermatology Clinic (28 completed the study), consumed 20% of their daily energy intake as almonds or a calorie-matched snack over 16 weeks. Photographic analysis revealed that the almond group experienced significantly diminished wrinkle severity, compared with the control group. The investigators concluded that daily almond consumption has the potential to decrease wrinkle severity in postmenopausal women and that almonds may confer natural antiaging effects.4
In a similar investigation 2 years later, Rybak et al. reported on a prospective, randomized controlled study to ascertain the effects of almond consumption on photoaging in postmenopausal women with Fitzpatrick skin types I or II who obtained 20% of their daily energy consumption via almonds or a calorie-matched snack for 24 weeks. Results demonstrated significant effects conferred by almond consumption, with average wrinkle severity substantially diminished in the almond group at weeks 16 (by 15%) and 24 (by 16%), compared with baseline. In addition, facial pigment intensity was reduced by 20% in the almond group by week 16 and this was maintained through the end of the study. Further, sebum excretion was higher in the control group. The investigators concluded that the daily consumption of almonds may have the potential to enhance protection against photoaging, particularly in terms of facial wrinkles and pigment intensity, in postmenopausal women.3
Later in 2021, Li et al. conducted a study in 39 healthy Asian women (18-45 years old) with Fitzpatrick skin types II to IV to investigate the effects of almond consumption on UVB resistance. The researchers randomized participants to eat either 1.5 oz of almonds or 1.8 oz of pretzels daily for 12 weeks. Results showed that the minimal erythema dose was higher in the almond group as compared with the control group. No differences were observed in hydration, melanin, roughness, or sebum on facial skin. The authors concluded that daily oral almond intake may improve photoprotection by raising the minimal erythema dose.2
In a 2022 review on the cutaneous benefits of sweet almond, evening primrose, and jojoba oils, Blaak and Staib noted that all three have been used for hundreds if not thousands of years in traditional medicine to treat various conditions, including skin disorders. Further, they concluded that the longstanding uses of these oils has been borne out by contemporary data, which reveal cutaneous benefits for adult and young skin, particularly in bolstering stratum corneum integrity, recovery, and lipid ratio.6
Later that year, Sanju et al., reporting on the development and assessment of a broad-spectrum polyherbal sunscreen delivered through solid lipid nanoparticles, noted that almond oil was among the natural ingredients used because of its photoprotective characteristics. Overall, the sunscreen formulation, Safranal, was found to impart robust protection against UV radiation.7
Wound healing
In 2020, Borzou et al. conducted a single-blind randomized clinical trial to ascertain the impact of topical almond oil in preventing pressure injuries. Data collection occurred over 8 months in a hospital setting, with 108 patients randomly assigned to receive almond oil, placebo (liquid paraffin), or the control (standard of care). The researchers found that topically applied almond oil was associated with a lower incidence of pressure injuries, and injuries that did occur arose later in the study than those in the paraffin and standard-of-care groups. Pressure injury incidence was 5.6% in the almond oil group, 13.9% in the placebo group, and 25.1% in the control group.8
That same year, Caglar et al. completed a randomized controlled trial in 90 preterm infants to assess the effects of sunflower seed oil and almond oil on the stratum corneum. Infants were randomly selected for treatment with either oil or control. A nurse researcher applied oils to the whole body except for the head and face four times daily for 5 days. Investigators determined that stratum corneum hydration was better in the oil groups as compared with control, with no difference found between sunflower seed and almond oils.9
Eczema, hand dermatitis, and striae
In 2018, Simon et al. performed a randomized, double-blind study to determine the short- and long-term effects of two emollients on pruritus and skin restoration in xerotic eczema. The emollients contained lactic acid and refined almond oil, with one also including polidocanol. Both emollients were effective in reducing the severity of itching, with skin moisture and lipid content found to have risen after the initial administration and yielding steady improvement over 2 weeks.10
Earlier that year, Zeichner et al. found that an OTC moisturizer containing sweet almond oil, which is rich in fatty acids and has been used to treat eczema and psoriasis for centuries, was effective in treating hand dermatitis. Specifically, the moisturizer, which contained 7% sweet almond oil and 2% colloidal oatmeal, was identified as safe and effective in resolving moderate to severe hand dermatitis.11
Some studies have also shown almond oil to be effective against striae gravidarum. Hajhashemi et al. conducted a double-blind clinical trial in 160 nulliparous women to compare the effects of aloe vera gel and sweet almond oil on striae gravidarum in 2018. Volunteers were randomly assigned to one of three case groups (Aloe vera, sweet almond oil, or base cream) who received topical treatment on the abdomen, or the fourth group, which received no treatment. Results showed that both treatment creams were effective in decreasing erythema and the pruritus associated with striae as well as in preventing their expansion.12 Previously, Tashan and Kafkasli showed in a nonrandomized study that massage with bitter almond oil may diminish the visibility of present striae gravidarum and prevent the emergence of new striae.13
Conclusion
Almonds and almond oil have been used as food and in traditional medical practices dating back several centuries. In the last decade, intriguing results have emerged regarding the effects of almond consumption or topical almond oil administration on skin health. While much more research is necessary, the recent data seem to support the traditional uses of this tree seed for dermatologic purposes.
Dr. Baumann is a private practice dermatologist, researcher, author, and entrepreneur in Miami. She founded the division of cosmetic dermatology at the University of Miami in 1997. The third edition of her bestselling textbook, “Cosmetic Dermatology” (New York: McGraw Hill), was published in 2022. Dr. Baumann has received funding for advisory boards and/or clinical research trials from Allergan, Galderma, Johnson & Johnson, and Burt’s Bees. She is the CEO of Skin Type Solutions, a SaaS company used to generate skin care routines in office and as an e-commerce solution. Write to her at [email protected].
References
1. Ahmad Z. Complement Ther Clin Pract. 2010 Feb;16(1):10-2.
2. Li JN et al. J Cosmet Dermatol. 2021 Sep;20(9):2975-80.
3. Rybak I et al. Nutrients. 2021 Feb 27;13(3):785.
4. Foolad N et al. Phytother Res. 2019 Dec;33(12):3212-7.
5. Lin TK et al. Int J Mol Sci. 2017 Dec 27;19(1):70.
6. Blaak J, Staib P. Int J Cosmet Sci. 2022 Feb;44(1):1-9.
7. Sanju N et al. J Cosmet Dermatol. 2022 Oct;21(10):4433-46.
8. Borzou SR et al. J Wound Ostomy Continence Nurs. 2020 Jul/Aug;47(4):336-42.
9. Caglar S et al. Adv Skin Wound Care. 2020 Aug;33(8):1-6.
10. Simon D et al. Dermatol Ther. 2018 Nov;31(6):e12692.
11. Zeichner JA et al. J Drugs Dermatol. 2018 Jan 1;17(1):78-82.
12. Hajhashemi M et al. J Matern Fetal Neonatal Med. 2018 Jul;31(13):1703-8.
13. Timur Tashan S, Kafkasli A. J Clin Nurs. 2012 Jun;21(11-12):1570-6.
Eczema, hand dermatitis, and striae
In 2018, Simon et al. performed a randomized, double-blind study to determine the short- and long-term effects of two emollients on pruritus and skin restoration in xerotic eczema. The emollients contained lactic acid and refined almond oil, with one also including polidocanol. Both emollients were effective in reducing the severity of itching, with skin moisture and lipid content found to have risen after the initial administration and yielding steady improvement over 2 weeks.10
Earlier that year, Zeichner et al. found that the use of an OTC sweet almond oil, rich in fatty acids and a standard-bearing treatment for eczema and psoriasis for centuries, was effective in treating hand dermatitis. Specifically, the moisturizer, which contained 7% sweet almond oil and 2% colloidal oatmeal, was identified as safe and effective in resolving moderate to severe hand dermatitis.11
Some studies have also shown almond oil to be effective against striae gravidarum. Hajhashemi et al. conducted a double-blind clinical trial in 160 nulliparous women to compare the effects of aloe vera gel and sweet almond oil on striae gravidarum in 2018. Volunteers were randomly assigned to one of three case groups (Aloe vera, sweet almond oil, or base cream) who received topical treatment on the abdomen, or the fourth group, which received no treatment. Results showed that both treatment creams were effective in decreasing erythema and the pruritus associated with striae as well as in preventing their expansion.12 Previously, Tashan and Kafkasli showed in a nonrandomized study that massage with bitter almond oil may diminish the visibility of present striae gravidarum and prevent the emergence of new striae.13
Conclusion
Almonds and almond oil have been used as food and in traditional medical practices dating back several centuries. In the last decade, intriguing results have emerged regarding the effects of almond consumption or topical almond oil administration on skin health. While much more research is necessary, the recent data seem to support the traditional uses of this tree seed for dermatologic purposes.
Dr. Baumann is a private practice dermatologist, researcher, author, and entrepreneur in Miami. She founded the division of cosmetic dermatology at the University of Miami in 1997. The third edition of her bestselling textbook, “Cosmetic Dermatology” (New York: McGraw Hill), was published in 2022. Dr. Baumann has received funding for advisory boards and/or clinical research trials from Allergan, Galderma, Johnson & Johnson, and Burt’s Bees. She is the CEO of Skin Type Solutions, a SaaS company used to generate skin care routines in office and as an e-commerce solution. Write to her at [email protected].
References
1. Ahmad Z. Complement Ther Clin Pract. 2010 Feb;16(1):10-2.
2. Li JN et al. J Cosmet Dermatol. 2021 Sep;20(9):2975-80.
3. Rybak I et al. Nutrients. 2021 Feb 27;13(3):785.
4. Foolad N et al. Phytother Res. 2019 Dec;33(12):3212-7.
5. Lin TK et al. Int J Mol Sci. 2017 Dec 27;19(1):70.
6. Blaak J, Staib P. Int J Cosmet Sci. 2022 Feb;44(1):1-9.
7. Sanju N et al. J Cosmet Dermatol. 2022 Oct;21(10):4433-46.
8. Borzou SR et al. J Wound Ostomy Continence Nurs. 2020 Jul/Aug;47(4):336-42.
9. Caglar S et al. Adv Skin Wound Care. 2020 Aug;33(8):1-6.
10. Simon D et al. Dermatol Ther. 2018 Nov;31(6):e12692.
11. Zeichner JA at al. J Drugs Dermatol. 2018 Jan 1;17(1):78-82.
12. Hajhashemi M et al. J Matern Fetal Neonatal Med. 2018 Jul;31(13):1703-8.
13. Timur Tashan S and Kafkasli A. J Clin Nurs. 2012 Jun;21(11-12):1570-6.
Almonds and almond oil are known to exhibit anti-inflammatory, antihepatotoxic, and immunity-boosting activity.1 The seeds of the deciduous almond tree, which is native to Iran and parts of the Levant, contain copious amounts of phenols and polyphenols, fatty acids, and vitamin E, all of which are known to exert antioxidant activity.2-5 These seeds have also been found to have a substantial impact on serum lipids.4 Emollient and sclerosant characteristics have been attributed to almond oil (Oleum amygdalae), which has been found to improve complexion and skin tone.5 Significantly, in vitro and in vivo studies have shown that UVB-induced photoaging can be attenuated through the use of almond oil and almond skin extract.2 Further, in traditional Chinese medicine, Ayurveda, and ancient Greco-Persian medicine, almond oil was used to treat cutaneous conditions, including eczema and psoriasis.1
Antiphotoaging activity
In 2019, Foolad and Vaughn conducted a prospective, investigator-blinded, randomized controlled trial to determine the effects of almond consumption on facial sebum production and wrinkles. Participants, recruited through the UC Davis Dermatology Clinic, consumed 20% of their daily energy intake as almonds or as a calorie-matched snack over 16 weeks; 28 postmenopausal women with Fitzpatrick skin types I and II completed the study. Photographic analysis revealed that the almond group experienced significantly diminished wrinkle severity, compared with the control group. The investigators concluded that daily almond consumption has the potential to decrease wrinkle severity in postmenopausal women and that almonds may confer natural antiaging effects.4
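To get a rough sense of scale, 20% of daily energy intake is a substantial almond dose. The sketch below estimates the corresponding mass of almonds; it assumes a 2,000-kcal reference diet and roughly 579 kcal per 100 g of raw almonds (typical nutrient-database values, neither of which comes from the study itself):

```python
# Rough estimate of the almond dose implied by "20% of daily energy intake".
# Assumptions (not from the study): a 2,000-kcal reference diet and
# ~579 kcal per 100 g of raw almonds (a typical nutrient-database value).
DAILY_KCAL = 2000
ALMOND_KCAL_PER_100G = 579

def almond_grams(fraction_of_energy: float) -> float:
    """Grams of almonds supplying the given fraction of daily energy."""
    kcal = DAILY_KCAL * fraction_of_energy
    return kcal / ALMOND_KCAL_PER_100G * 100

grams = almond_grams(0.20)  # the 20% fraction used in the trial
print(round(grams))  # ≈ 69 g, roughly 2.4 oz of almonds per day
```

Under these assumptions, participants in the almond arm would have been eating on the order of 60-70 g of almonds daily, far more than an occasional handful.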
In a similar investigation 2 years later, Rybak et al. reported on a prospective, randomized controlled study to ascertain the effects of almond consumption on photoaging in postmenopausal women with Fitzpatrick skin types I or II who obtained 20% of their daily energy consumption via almonds or a calorie-matched snack for 24 weeks. Results demonstrated significant effects conferred by almond consumption, with average wrinkle severity substantially diminished in the almond group at weeks 16 (by 15%) and 24 (by 16%), compared with baseline. In addition, facial pigment intensity was reduced by 20% in the almond group by week 16 and this was maintained through the end of the study. Further, sebum excretion was higher in the control group. The investigators concluded that the daily consumption of almonds may have the potential to enhance protection against photoaging, particularly in terms of facial wrinkles and pigment intensity, in postmenopausal women.3
Later in 2021, Li et al. conducted a study in 39 healthy Asian women (18-45 years old) with Fitzpatrick skin types II to IV to investigate the effects of almond consumption on UVB resistance. The researchers randomized participants to eat either 1.5 oz of almonds or 1.8 oz of pretzels daily for 12 weeks. Results showed that the minimal erythema dose was higher in the almond group as compared with the control group. No differences were observed in hydration, melanin, roughness, or sebum on facial skin. The authors concluded that daily oral almond intake may improve photoprotection by raising the minimal erythema dose.2
In a 2022 review on the cutaneous benefits of sweet almond, evening primrose, and jojoba oils, Blaak and Staib noted that all three have been used for hundreds if not thousands of years in traditional medicine to treat various conditions, including skin disorders. Further, they concluded that the longstanding uses of these oils have been borne out by contemporary data, which reveal cutaneous benefits for adult and young skin, particularly in bolstering stratum corneum integrity, recovery, and lipid ratio.6
Later that year, Sanju et al., reporting on the development and assessment of a broad-spectrum polyherbal sunscreen delivered through solid lipid nanoparticles, noted that almond oil was among the natural ingredients used because of its photoprotective characteristics. Overall, the sunscreen formulation, which included safranal, was found to impart robust protection against UV radiation.7
Wound healing
In 2020, Borzou et al. conducted a single-blind randomized clinical trial to ascertain the impact of topical almond oil in preventing pressure injuries. Data collection occurred over 8 months in a hospital setting, with 108 patients randomly assigned to receive almond oil, placebo (liquid paraffin), or the control (standard of care). The researchers found that topically applied almond oil was associated with a lower incidence of pressure injuries, and that the injuries that did occur arose later in the study than those in the paraffin and standard-of-care groups. Pressure injury incidence was 5.6% in the almond oil group, 13.9% in the placebo group, and 25.1% in the control group.8
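Taken at face value, these incidences imply sizable relative differences between groups. A quick check of the arithmetic, using only the percentages quoted above (an illustration, not a reanalysis of the trial):

```python
# Compare the reported pressure-injury incidences (percentages quoted above).
incidence = {
    "almond_oil": 0.056,        # 5.6%
    "paraffin_placebo": 0.139,  # 13.9%
    "standard_care": 0.251,     # 25.1%
}

def relative_risk(treated: float, control: float) -> float:
    """Risk in the treated group relative to the control group."""
    return treated / control

rr = relative_risk(incidence["almond_oil"], incidence["standard_care"])
arr = incidence["standard_care"] - incidence["almond_oil"]  # absolute risk reduction

print(f"RR vs standard care: {rr:.2f}")      # ~0.22
print(f"Absolute risk reduction: {arr:.1%}")  # ~19.5 percentage points
```

In other words, the almond oil group's incidence was roughly a fifth of the standard-of-care group's, though confidence intervals and trial design caveats would of course apply to any formal comparison.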
That same year, Caglar et al. completed a randomized controlled trial in 90 preterm infants to assess the effects of sunflower seed oil and almond oil on the stratum corneum. Infants were randomly selected for treatment with either oil or control. A nurse researcher applied oils to the whole body except for the head and face four times daily for 5 days. Investigators determined that stratum corneum hydration was better in the oil groups as compared with control, with no difference found between sunflower seed and almond oils.9
Eczema, hand dermatitis, and striae
In 2018, Simon et al. performed a randomized, double-blind study to determine the short- and long-term effects of two emollients on pruritus and skin restoration in xerotic eczema. The emollients contained lactic acid and refined almond oil, with one also including polidocanol. Both emollients were effective in reducing the severity of itching, with skin moisture and lipid content found to have risen after the initial administration and yielding steady improvement over 2 weeks.10
Earlier that year, Zeichner et al. found that an OTC moisturizer containing sweet almond oil, which is rich in fatty acids and has been used to treat eczema and psoriasis for centuries, was effective in treating hand dermatitis. Specifically, the moisturizer, which contained 7% sweet almond oil and 2% colloidal oatmeal, was identified as safe and effective in resolving moderate to severe hand dermatitis.11
Some studies have also shown almond oil to be effective against striae gravidarum. In 2018, Hajhashemi et al. conducted a double-blind clinical trial in 160 nulliparous women to compare the effects of aloe vera gel and sweet almond oil on striae gravidarum. Volunteers were randomly assigned to one of three case groups (aloe vera gel, sweet almond oil, or base cream), which received topical treatment on the abdomen, or to a fourth group, which received no treatment. Results showed that both treatment creams were effective in decreasing erythema and the pruritus associated with striae as well as in preventing their expansion.12 Previously, Tashan and Kafkasli showed in a nonrandomized study that massage with bitter almond oil may diminish the visibility of present striae gravidarum and prevent the emergence of new striae.13
Conclusion
Almonds and almond oil have been used as food and in traditional medical practices dating back several centuries. In the last decade, intriguing results have emerged regarding the effects of almond consumption or topical almond oil administration on skin health. While much more research is necessary, the recent data seem to support the traditional uses of this tree seed for dermatologic purposes.
Dr. Baumann is a private practice dermatologist, researcher, author, and entrepreneur in Miami. She founded the division of cosmetic dermatology at the University of Miami in 1997. The third edition of her bestselling textbook, “Cosmetic Dermatology” (New York: McGraw Hill), was published in 2022. Dr. Baumann has received funding for advisory boards and/or clinical research trials from Allergan, Galderma, Johnson & Johnson, and Burt’s Bees. She is the CEO of Skin Type Solutions, a SaaS company used to generate skin care routines in office and as an e-commerce solution. Write to her at [email protected].
References
1. Ahmad Z. Complement Ther Clin Pract. 2010 Feb;16(1):10-2.
2. Li JN et al. J Cosmet Dermatol. 2021 Sep;20(9):2975-80.
3. Rybak I et al. Nutrients. 2021 Feb 27;13(3):785.
4. Foolad N et al. Phytother Res. 2019 Dec;33(12):3212-7.
5. Lin TK et al. Int J Mol Sci. 2017 Dec 27;19(1):70.
6. Blaak J, Staib P. Int J Cosmet Sci. 2022 Feb;44(1):1-9.
7. Sanju N et al. J Cosmet Dermatol. 2022 Oct;21(10):4433-46.
8. Borzou SR et al. J Wound Ostomy Continence Nurs. 2020 Jul/Aug;47(4):336-42.
9. Caglar S et al. Adv Skin Wound Care. 2020 Aug;33(8):1-6.
10. Simon D et al. Dermatol Ther. 2018 Nov;31(6):e12692.
11. Zeichner JA et al. J Drugs Dermatol. 2018 Jan 1;17(1):78-82.
12. Hajhashemi M et al. J Matern Fetal Neonatal Med. 2018 Jul;31(13):1703-8.
13. Timur Tashan S, Kafkasli A. J Clin Nurs. 2012 Jun;21(11-12):1570-6.
Skin has different daytime and nighttime needs, emerging circadian research suggests
SAN DIEGO – “Paying attention to the circadian rhythm of the skin is every bit as important as moisturizing the skin,” Dr. Shamban, a dermatologist who practices in Santa Monica, Calif., said at the annual Masters of Aesthetics Symposium. “It is paramount to both your morning and evening skin regimen routine,” she added.
Circadian rhythms are physical, mental, and behavioral changes that follow a 24-hour cycle. “These natural processes respond primarily to light and dark and affect most living things, including animals, plants, and microbes,” she said. “The circadian system is composed of peripheral circadian oscillators in many other cells, including the skin.”
The science has been around awhile, but dermatologists didn’t understand its impact until recently, she said.
In 1729, the French astronomer Jean-Jacques d’Ortous de Mairan demonstrated that mimosa leaves, which open at dawn and close at dusk, continued this cycle even when kept in darkness. In the 1970s, Seymour Benzer and Ronald Konopka showed that mutations in an unknown gene disrupted the circadian clock of fruit flies.
And in 2017, the Nobel Prize in Physiology or Medicine was awarded to Jeffrey C. Hall, Michael Rosbash, and Michael W. Young for discovering molecular mechanisms that control circadian rhythm. Using fruit flies as a model, they isolated a gene that controls the normal daily biological rhythm.
“They showed that this gene encodes a protein that accumulates in the cell during the night and is then degraded during the day, and they identified additional protein components, exposing the mechanism governing the self-sustaining clockwork inside the cell,” said Dr. Shamban.
In humans and other mammals, the primary body clock is located in the suprachiasmatic nucleus, a cluster of approximately 10,000 neurons located on either side of the midline above the optic chiasma, about 3 cm behind the eyes. Several clock genes have been identified that regulate and control transcription and translation.
“Expression of these core clock genes inside the cell influences many signaling pathways, which allows the cells to identify the time of day and perform their appropriate function,” Dr. Shamban said. “Furthermore, phosphorylation of core clock proteins leads to degradation to keep the 24-hour cycle in sync.”
Photoreceptive molecules known as opsins also appear to play a role in regulating the skin’s clock. A systematic review of 22 articles published in 2020 found that opsins are present in keratinocytes, melanocytes, dermal fibroblasts, and hair follicle cells, and they have been shown to mediate wound healing, melanogenesis, hair growth, and skin photoaging in human and nonhuman species.
“You may wonder, why does the skin respond so nicely to light?” Dr. Shamban said. “Because it contains opsins, and light exposure through opsin-regulated pathways stimulates melanin production.”
Patients can support their skin’s clock genes by understanding that skin barrier functions such as photoprotection and sebum production are increased during the day, while skin permeability processes such as DNA repair, cell proliferation, and blood flow are enhanced at night.
“Your skin has different daytime and nighttime needs,” Dr. Shamban commented. “Simply put, daytime is defense, and nighttime is offense. I think we’ve known this intuitively, but to know that there is science supporting this idea is important.”
Dr. Shamban wrote the book “Heal Your Skin: The Breakthrough Plan for Renewal” (Wiley, 2011). She disclosed that she conducts clinical trials for many pharmaceutical and device companies.
AT MOAS 2023
CoolSculpting remains most popular procedure for noninvasive fat removal, expert says
SAN DIEGO – After a widely publicized case of paradoxical adipose hyperplasia following CoolSculpting, some aesthetic experts wondered how consumers would embrace the fat reduction procedure going forward.
The negative publicity surrounding this case “is thought to have detracted from some of the volume of it [in terms of demand], but it looks like it’s coming back again,” Omar A. Ibrahimi, MD, PhD, medical director of the Connecticut Skin Institute, Stamford, said during a presentation on noninvasive fat removal treatment options at the annual Masters of Aesthetics Symposium.
In fact, he said, CoolSculpting accounts for an estimated 72% of noninvasive fat removal treatments performed in the United States. “By and large, there is high satisfaction with this procedure,” said Dr. Ibrahimi. “There have been about 17 million procedures done worldwide. Paradoxical adipose hyperplasia is a very rare side effect. As newer iterations of this technology have come out, I think there is an even lower incidence.”
CoolSculpting, or cryolipolysis, removes excess fat from stubborn areas by freezing it, inducing a cold panniculitis that destroys fat cells. The technology was developed by Dieter Manstein, MD, PhD, and R. Rox Anderson, MD, at Massachusetts General Hospital and Harvard Medical School, both in Boston, and cleared by the U.S. Food and Drug Administration for noninvasive fat removal in 2010.
“If you kill a fat cell in an adult, it can’t come back,” Dr. Ibrahimi said. “When this technology first came out it was very simple. We treated an area once and were done. Now we know to treat the area multiple times, and you can treat a much larger volume in a patient during one session safely. You can bring about dramatic results, but it often takes a series of 35-minute treatment cycles and about 3 months to see clinical results. There are published studies showing that results are persisting even 10 years after treatment. This is nice, because I tell my patients, ‘if you keep up with your diet and exercise, we don’t expect the fat to come back.’ ”
Other noninvasive options for fat removal include the following:
- Ultrasound. Options include high-intensity focused ultrasound (Liposonix) and pulsed focused ultrasound (UltraShape). Dr. Ibrahimi described these devices as “very painful, and the results were very difficult to reproduce from the initial clinical studies.”
- Low-level light therapy. Early devices on the market include Zerona and UltraSlim. “Oftentimes these lacked any sort of histological analysis,” he said. “There was no obvious mechanism of action, and questionable efficacy.”
- Laser. Powered by a 1060-nm laser, SculpSure can reduce fat cells safely in 25-minute treatment sessions, Dr. Ibrahimi said. Each session is delivered with one of four available applicators and involves 4 minutes of heating and the next 21 minutes alternating between heating and cooling. “You’re trying to reach a target temperature that kills fat cells,” he explained. “The beauty of having these applicators is that you can kind of customize to the individual patient; it uses contact cooling, and it’s safe for all Fitzpatrick skin types. This device results in a 10%-12% reduction in fat, so it’s clinically significant but very modest.”
A robotic version of the technology, known as the Robotic Fat Killer, is also available. So is the EON, a touchless 1064-nm laser FDA cleared for abdominal, flank, thigh, and back fat reduction. “It adapts to the body shape of the area and individual to deliver a customized treatment,” Dr. Ibrahimi said.
- Radiofrequency. Most devices on the market, such as truSculpt and Vanquish, are powered by monopolar radiofrequency (RF) energy. “Similar to the 1060-nm laser, you can customize these treatments,” he said. “You’re treating to a target temperature. It involves 15-minute cycles, and there are clinical, histology, and ultrasound data supporting this technology.”
Dr. Ibrahimi uses truSculpt and CoolSculpting in his practice, “but sometimes you have patients who are ‘too fit’ for CoolSculpting; they don’t fit the handpiece perfectly,” he said. “That’s where having a monopolar RF or a 1060-nm laser is useful, to help you hone in on those stubborn pockets of fat.”
SAN DIEGO – After a widely publicized case of paradoxical adipose hyperplasia following CoolSculpting, some aesthetic experts wondered how consumers would embrace the fat reduction procedure going forward.
The negative publicity surrounding this case “is thought to have detracted from some of the volume of it [in terms of demand], but it looks like it’s coming back again,” Omar A. Ibrahimi, MD, PhD, medical director of the Connecticut Skin Institute, Stamford, said during a presentation on noninvasive fat removal treatment options at the annual Masters of Aesthetics Symposium.
In fact, he said, CoolSculpting accounts for an estimated 72% of noninvasive fat removal treatments performed in the United States. “By and large, there is high satisfaction with this procedure,” said Dr. Ibrahimi. “There have been about 17 million procedures done worldwide. Paradoxical adipose hyperplasia is a very rare side effect. As newer iterations of this technology have come out, I think there is an even lower incidence.”
CoolSculpting, or cryolipolysis, removes fat from stubborn areas by cooling it to trigger a cold-induced panniculitis that destroys fat cells. The technology was developed by Dieter Manstein, MD, PhD, and R. Rox Anderson, MD, at Massachusetts General Hospital and Harvard Medical School, both in Boston, and cleared by the U.S. Food and Drug Administration for noninvasive fat removal in 2010.
“If you kill a fat cell in an adult, it can’t come back,” Dr. Ibrahimi said. “When this technology first came out it was very simple. We treated an area once and were done. Now we know to treat the area multiple times, and you can treat a much larger volume in a patient during one session safely. You can bring about dramatic results, but it often takes a series of 35-minute treatment cycles and about 3 months to see clinical results. There are published studies showing that results are persisting even 10 years after treatment. This is nice, because I tell my patients, ‘if you keep up with your diet and exercise, we don’t expect the fat to come back.’ ”
Other noninvasive options for fat removal include the following:
- Ultrasound. Options include high-intensity focused ultrasound (Liposonix) and pulsed focused ultrasound (UltraShape). Dr. Ibrahimi described these devices as “very painful, and the results were very difficult to reproduce from the initial clinical studies.”
- Low-level light therapy. Early devices on the market include Zerona and UltraSlim. “Oftentimes these lacked any sort of histological analysis,” he said. “There was no obvious mechanism of action, and questionable efficacy.”
- Laser. Powered by a 1060-nm laser, SculpSure can reduce fat cells safely in 25-minute treatment sessions, Dr. Ibrahimi said. Each session is delivered with one of four available applicators and involves 4 minutes of heating and the next 21 minutes alternating between heating and cooling. “You’re trying to reach a target temperature that kills fat cells,” he explained. “The beauty of having these applicators is that you can kind of customize to the individual patient; it uses contact cooling, and it’s safe for all Fitzpatrick skin types. This device results in a 10%-12% reduction in fat, so it’s clinically significant but very modest.”
A robotic version of the technology, known as the Robotic Fat Killer, is also available. So is the EON, a touchless 1064-nm laser FDA cleared for abdominal, flank, thigh, and back fat reduction. “It adapts to the body shape of the area and individual to deliver a customized treatment,” Dr. Ibrahimi said.
- Radiofrequency. Most devices on the market, such as truSculpt and Vanquish, are powered by monopolar radiofrequency (RF) energy. “Similar to the 1060-nm laser, you can customize these treatments,” he said. “You’re treating to a target temperature. It involves 15-minute cycles, and there are clinical, histology, and ultrasound data supporting this technology.”
Dr. Ibrahimi uses truSculpt and CoolSculpting in his practice, “but sometimes you have patients who are ‘too fit’ for CoolSculpting; they don’t fit the handpiece perfectly,” he said. “That’s where having a monopolar RF or a 1060-nm laser is useful, to help you hone in on those stubborn pockets of fat.”
- Deoxycholic acid. While not a device, deoxycholic acid (Kybella), administered subcutaneously, is approved by the FDA for improving “the appearance of moderate to severe convexity or fullness associated with submental fat” in adults. “A lot of people use it off-label on the abdomen and other stubborn areas,” Dr. Ibrahimi said. “It often requires a series of treatments. That’s the biggest limiting issue with using this technology. It works well, but compared to CoolSculpting, there is a lot of swelling and bruising, which you would expect with an injectable. Managing that down time and hand holding is difficult. But if you can get patients to buy into the downtime, [it yields] pretty impressive results.”
Dr. Ibrahimi also discussed the promise of electrical muscle stimulation for strengthening, firming, and toning muscles. The technology applies an electrical current through electrodes placed on the skin, which stimulates muscles, or through an electromagnetic field.
In a published study of 45 men and women, Dr. Ibrahimi, Anne Chapas, MD, medical director of UnionDerm in New York, and colleagues evaluated the safety and efficacy of an electrical muscle stimulation system for improving muscle strength and toning of the upper extremities.
For the treatments, they used disposable contact pads to place pairs of electrodes on the biceps and on the triceps. All patients (median age 42) received 30-minute treatments twice weekly for 2 or 3 weeks, corresponding to four or six total sessions respectively, depending on the study site. Follow-ups were conducted 30 and 90 days after treatment. They used a validated dynamometer device to measure strength at baseline, at the final treatment session, and at the post-treatment 30- and 90-day visits.
“We saw about a 40% increase in strength in the biceps and about a 30% increase in strength in the triceps,” Dr. Ibrahimi said. “Interestingly, the effect got greater at 30 and 90 days, so this is something that lingers on for quite a while.” In addition to the increase in strength, the researchers and patients noted an improvement in the appearance of the arms. He predicted that this technology “is going to play a role in functional medicine and getting injured athletes back to their sports faster.”
Dr. Ibrahimi disclosed that he is a member of the Advisory Board for Accure Acne, AbbVie, Cutera (manufacturer of truSculpt), Lutronic, Blueberry Therapeutics, Cytrellis, and Quthero. He also holds stock in many device and pharmaceutical companies (none are relevant to the treatments mentioned in this story).
Obesity-related cardiovascular disease deaths surging
TOPLINE:
In contrast to an overall decline in cardiovascular mortality, obesity-related cardiovascular deaths have risen substantially in the past 2 decades, most prominently among Black women, the authors wrote.
METHODOLOGY:
Data from the U.S. population-level Multiple Cause of Death database were analyzed, including 281,135 deaths in 1999-2020 for which obesity was listed as a contributing factor.
TAKEAWAY:
- Overall, the crude rate of all cardiovascular deaths dropped by 17.6% across all races.
- However, age-adjusted obesity-related cardiovascular mortality tripled from 2.2/100,000 to 6.6/100,000 from 1999 to 2020, consistent across all racial groups.
- Blacks had the highest age-adjusted obesity-related cardiovascular mortality (rising from 4.2/100,000 in 1999 to 11.6/100,000 in 2020).
- Ischemic heart disease was the most common cardiovascular cause of death across all races, and hypertensive disease was second.
- Age-adjusted obesity-related cardiovascular mortality was higher among Blacks (6.7/100,000) than any other racial group, followed by American Indians or Alaskan Natives (3.8/100,000), and lowest among Asian or Pacific Islanders (0.9/100,000).
- The risk of obesity-related cardiovascular disease death rose most rapidly among American Indians and Alaskan Natives.
- Among Blacks, age-adjusted mortality was slightly higher among women than men (6.7/100,000 vs. 6.6/100,000), whereas the reverse was true for all other races (0.6-3.0/100,000 vs. 1.2-6.0/100,000).
- Blacks living in urban settings experienced higher rates of age-adjusted cardiovascular mortality than those living in rural areas (6.8/100,000 vs. 5.9/100,000), whereas the opposite was true for all other racial groups (0.9-3.5/100,000 vs. 2.2-5.4/100,000).
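The fold changes behind the takeaways above can be checked with simple arithmetic; this is an illustrative sketch only, using the per-100,000 rates quoted in the bullets:

```python
# Age-adjusted obesity-related cardiovascular mortality per 100,000,
# as quoted in the takeaways above (1999 vs. 2020).
overall_1999, overall_2020 = 2.2, 6.6
black_1999, black_2020 = 4.2, 11.6

overall_fold = overall_2020 / overall_1999  # 3.0, i.e. "tripled"
black_fold = black_2020 / black_1999        # roughly 2.8-fold

print(f"Overall: {overall_fold:.1f}-fold increase, 1999-2020")
print(f"Black adults: {black_fold:.1f}-fold increase, 1999-2020")
```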
IN PRACTICE:
“There is need for dedicated health strategies aimed at individual communities to better understand and tackle the social determinants of obesity and to design interventions that may alleviate the population burden of both obesity and cardiovascular disease,” the authors wrote.
SOURCE:
The study, by Zahra Raisi-Estabragh, MD, PhD, Queen Mary University, London, and colleagues, was published online Sept. 6 in the Journal of the American Heart Association.
LIMITATIONS:
- Database limited to U.S. residents.
- Possible miscoding or diagnostic errors.
- Potential for residual confounding.
- No data on underlying drivers of observed trends.
DISCLOSURES:
Dr. Raisi-Estabragh has reported receiving funding from the Integrated Academic Training program of the National Institute for Health Research and a Clinical Research Training Fellowship from the British Heart Foundation. Another author has reported receiving research support from the National Heart, Lung, and Blood Institute.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF THE AMERICAN HEART ASSOCIATION
Can a decrease in dopamine lead to binge eating?
In medical school, we were repeatedly advised that there is both a science and an art to the practice of medicine. In these days of doc-in-a-box online consultations for obesity, it’s tempting to think that there’s a one-size-fits-all purely scientific approach for these new weight loss medications. Yet, for every nine patients who lose weight seemingly effortlessly on this class of medication, there is always one whose body stubbornly refuses to submit.
Adam is a 58-year-old man who came to me recently because he was having difficulty losing weight. Over the past 20 years, he’d been steadily gaining weight and now, technically has morbid obesity (a term which should arguably be obsolete). His weight gain is complicated by high blood pressure, high cholesterol, and obstructive sleep apnea. His sleep apnea has caused such profound exhaustion that he no longer has the energy to work out. He also has significant ADHD, which has been left untreated because of his ability to white-knuckle it through his many daily meetings and calls. A married father of three, he is a successful portfolio manager at a high-yield bond fund.
Adam tends to eat minimally during the day, thereby baffling his colleagues with the stark contrast between his minimal caloric intake and his large belly. However, when he returns from work late at night (kids safely tucked into bed), the floodgates open. He reports polishing off pints of ice cream, scarfing down bags of cookies, inhaling trays of brownies. No carbohydrate is off limits to him once he steps off the Metro North train and crosses the threshold from work to home.
Does Adam simply lack the desire or common-sense willpower to make the necessary changes in his lifestyle or is there something more complicated at play?
I would argue that Adam’s ADHD triggered a binge-eating disorder (BED) that festered unchecked over the past 20 years. Patients with BED typically eat massive quantities of food over short periods of time – often when they’re not even hungry. Adam admitted that he would generally continue to eat well after feeling stuffed to the brim.
The answer probably lies with dopamine, a neurotransmitter produced in the reward centers of the brain that regulates how people experience pleasure and control impulses. We believe that people with ADHD have low levels of dopamine (it’s actually a bit more complicated, but this is the general idea). These low levels of dopamine lead people to self-medicate with sugars, salt, and fats to increase dopamine levels.
Lisdexamfetamine (Vyvanse) is a Food and Drug Administration–approved treatment option for both ADHD and binge eating. It raises the levels of dopamine (as well as norepinephrine) in the brain’s reward center. Often, the strong urge to binge subsides rapidly once ADHD is properly treated.
Rather than starting Adam on semaglutide or a similar agent, I opted to start him on lisdexamfetamine. When I spoke to him 1 week later, he confided that the world suddenly shifted into focus, and he was able to plan his meals throughout the day and resist the urge to binge late at night.
I may eventually add a semaglutide-like medication if his weight loss plateaus, but for now, I will focus on raising his dopamine levels to tackle the underlying cause of his weight gain.
Dr. Messer is a clinical assistant professor at the Icahn School of Medicine at Mount Sinai, New York. She disclosed no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Treating fractures in elderly patients: Beyond the broken bone
While half the fracture-prevention battle is getting people diagnosed with low bone density, nearly 80% of older Americans who suffer bone breaks are not tested or treated for osteoporosis. Fractures associated with aging and diminished bone mineral density exact an enormous toll on patients’ lives and cost the health care system billions of dollars annually, according to Bone Health and Osteoporosis: A Report of the Surgeon General. But current gaps in patient education and bone density screening are huge.
“It’s concerning that older patients at risk for fracture are often not screened to determine their risk factors contributing to osteoporosis and patients are not educated about fracture prevention,” said Meryl S. LeBoff, MD, an endocrinologist at Brigham and Women’s Hospital, chief of the calcium and bone section, and professor of medicine at Harvard Medical School, Boston. “Furthermore, the majority of highest-risk women and men who do have fractures are not screened and they do not receive effective, [Food and Drug Administration]–approved therapies.”
Recent guidelines
Screening with dual-energy x-ray absorptiometry (DEXA) is recommended for all women at age 65 and all men at age 70. But the occasion of a fracture in an older person who has not yet met these age thresholds should prompt a bone density assessment.
“Doctors need to stress that one in two women and one in four men over age 50 will have a fracture in their remaining lifetimes,” Dr. LeBoff said. “Primary care doctors play a critical role in ordering timely bone densitometry for both sexes.
If an older patient has been treated for a fracture, the main goal going forward is to prevent another one, for which the risk is highest in the 2 years after the incident fracture.”
“Elderly patients need to understand that a fracture at their age is like a heart attack of the bone,” said Kendall F. Moseley, MD, clinical director of the division of endocrinology, diabetes & metabolism at Johns Hopkins Medicine in Baltimore. Just as cardiovascular risk factors such as high blood pressure and blood lipids are silent before a stroke or infarction, she added, the bone thinning of old age is also silent.
Endocrinologist Jennifer J. Kelly, DO, director of the metabolic bone program and an associate professor at the University of Vermont Medical Center in Burlington, said a fracture in anyone over age 50 that appears not to have resulted from a traumatic blow is a compelling reason to order a DEXA exam.
Nahid J. Rianon, MBBS/MD, DrPH, assistant professor of the division of geriatric medicine at the UTHealth McGovern Medical School, Houston, goes further: “Any fracture in someone age 50 and older warrants screening for osteoporosis. And if the fracture is nontraumatic, that is by definition a clinical diagnosis of osteoporosis regardless of normal results on bone density tests and they should be treated medically. There are aspects of bone that we still can’t measure in the clinical setting.”
If DEXA is not accessible, fracture risk over the next 10 years can be evaluated based on multiple patient characteristics and medical history using the online FRAX calculator.
Just a 3% risk of hip fracture on FRAX is considered an indication to begin medical osteoporosis treatment in the United States regardless of bone density test results, Dr. Rianon said.
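The screening and treatment rules laid out above reduce to a few simple predicates. The sketch below is purely illustrative: the function names are mine, the age cutoffs (women 65, men 70, any fracture at 50 or older) and the 3% FRAX hip-fracture threshold come from the text, and nothing here computes an actual FRAX score, which requires the official online calculator.

```python
def dexa_indicated(age: int, is_female: bool, has_fracture: bool) -> bool:
    """DEXA screening per the guidance above: all women at 65, all men at 70,
    or a fracture in an older adult (age 50+) before those thresholds."""
    threshold = 65 if is_female else 70
    return age >= threshold or (age >= 50 and has_fracture)

def frax_treatment_indicated(hip_risk_pct: float) -> bool:
    """U.S. convention cited above: a 10-year hip fracture risk of 3% or more
    on FRAX is an indication for medical osteoporosis treatment."""
    return hip_risk_pct >= 3.0

# A 58-year-old woman with a recent fracture qualifies for DEXA
# despite being under the age-65 screening threshold:
print(dexa_indicated(58, is_female=True, has_fracture=True))  # True
print(frax_treatment_indicated(3.5))                          # True
```

These rules intentionally mirror Dr. Rianon’s point: a qualifying fracture or FRAX result drives the decision even when bone density testing is normal or unavailable.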
Fracture management
Whether a senior suffers a traumatic fracture or an osteoporosis-related fragility fracture, older age can impede the healing process in some patients. Senescence may also increase systemic proinflammatory status, according to Clark and colleagues, writing in Current Osteoporosis Reports.
They called for research to develop more directed treatment options for the elderly population.
Dr. Rianon noted that healing may also be affected by a decrease in muscle mass, which plays a role in holding the bone in place. “But it is still controversial how changing metabolic factors affect bone healing in the elderly.”
However, countered Dr. Kelly, fractures in elderly patients are not necessarily less likely to mend – if osteoporosis is not present. “Many heal very well – it really depends more upon their overall health and medical history. Whether or not a person requires surgery depends more upon the extent of the fracture and if the bone is able to align and heal appropriately without surgery.”
Fracture sites
Spine. According to the American Academy of Orthopaedic Surgeons, the earliest and most frequent site of fragility fractures in the elderly is the spine. Most vertebral fracture pain improves within 3 months without specific treatment. A short period of rest, limited analgesic use, and possible back bracing may help as the fractures heal on their own. But if pain is severe and persistent, vertebral augmentation with percutaneous kyphoplasty or vertebroplasty may be an option. These procedures, however, can destabilize surrounding discs because of the greater thickness of the injected cement.
Hip. The most dangerous fractures occur in the hip. These carry at least a 20% risk of death in the first postoperative year and must be treated surgically. Those in the proximal femur, the head, or the femoral neck will usually need hip replacement, but if the break is farther down, it may be repaired with cement, screws, plates, and rods.
Distal radius. Outcomes of wrist fractures may be positive without surgical intervention, according to a recent retrospective analysis from Turkey by Yalin and colleagues. In a comparison of clinical outcomes in seniors aged 70-89 and assigned to cast immobilization or various surgical treatments for distal radius fractures, no statistically significant difference was found in patient-reported disability scores and range of motion values between casting and surgery in the first postoperative year.
Other sites. Fractures in the elderly are not uncommon in the shoulder, distal radius, cubitus, proximal humerus, and humerus. These fractures are often treated without surgery, but nevertheless signal a high risk for additional fractures.
Bone-enhancing medications
Even in the absence of diagnosed low bone density or osteoporosis, anabolic agents such as the synthetic parathyroid hormone analogues abaloparatide (Tymlos) and teriparatide (Forteo) may be used in some cases with a poor healing prognosis, and may also be used for people undergoing surgeries such as spinal fusion, though there are no clinical guidelines for this. “We receive referrals regularly for this treatment from our orthopedics colleagues, but it is considered an off-label use,” Dr. Kelly said.
The anabolics teriparatide and romosozumab (Evenity) have proved effective in lowering fractures in high-risk older women.
Post fracture
After recovering from a fracture, elderly people are strongly advised to make lifestyle changes to boost bone health and reduce risk of further fractures, said Willy M. Valencia, MD, a geriatrician-endocrinologist at the Cleveland Clinic. Apart from active daily living, he recommends several types of formal exercise to promote bone formation; increase muscle mass, strength, and flexibility; and improve endurance, balance, and gait. The National Institute on Aging outlines suitable exercise programs for seniors.
“These exercises will help reduce the risk of falling and to avoid more fractures,” he said. “Whether a patient has been exercising before the fracture or not, they may feel some reticence or reluctance to take up exercise afterwards because they’re afraid of having another fracture, but they should understand that their fracture risk increases if they remain sedentary. They should start slowly but they can’t be sitting all day.”
Even before it’s possible to exercise at the healing fracture site, added Dr. Rianon, it’s advisable to work other areas of the body. “Overall mobility is important, and exercising other parts of the body can stimulate strength and help prevent falling.”
In other postsurgical measures, a bone-friendly diet rich in calcium and vitamin D, as well as supplementation with these vital nutrients, is essential to lower the risk of falling.
Fall prevention is paramount, said Dr. Valencia. While exercise can improve gait, balance, and endurance, logistical measures may also be necessary. Seniors may have to move to a one-floor domicile with no stairs to negotiate. At the very least, they need to fall-proof their daily lives by upgrading their eyeglasses and home lighting, eliminating obstacles and loose carpets, fixing bannisters, and installing bathroom handrails. Some may need assistive devices for walking, especially outdoors in slippery conditions.
At the end of the day, the primary care physician’s role in screening for bone problems before fracture and in postsurgical care is key. “Risk factors for osteoporosis and fracture risk must be added to the patient’s chart,” said Dr. Rianon. Added Dr. Moseley: “No matter how busy they are, my hope is that primary care physicians will not put patients’ bone health at the bottom of the clinical agenda.”
When does a bicarb drip make sense?
A 70-year-old woman is admitted to the intensive care unit with a pH of 7.1, an acute kidney injury (AKI), and ketonuria. She is volume depleted and her history is consistent with starvation ketosis. This LOL truly is in NAD (that’s little old lady in no acute distress, for those who haven’t read The House of God). She is clinically stable and seemingly unperturbed by the flurry of activity surrounding her admission.
Your resident is concerned by the severity of the acidosis and suggests starting an intravenous bicarbonate drip. The fellow is adamantly against it. He’s been taught that intravenous bicarbonate increases the serum pH but paradoxically causes intracellular acidosis. As the attending you elect to observe fellow autonomy – no bicarb is given. Because any debate on rounds is a “teachable moment,” you decide to review the evidence and physiology behind infusing bicarbonate.
What do the data reveal?
An excellent review published in CHEST in 2000 covers the physiologic effects of bicarbonate, specifically related to lactic acidosis, which our patient didn’t have. Aside from that difference, the review validates the fellow’s opinion. In short, acidosis itself is unlikely to provoke hemodynamic or respiratory compromise outside the setting of shock or hypercapnia, while intravenous bicarbonate can lead to intracellular acidosis, hypercapnia, hypocalcemia, and a reduction in oxygen delivery via the Bohr effect. The authors concluded that because the benefits are unproven and the negative effects are real, intravenous bicarbonate should not be used to correct a metabolic acidosis.
The CHEST review hardly settles the issue, though. A survey published a few years later found a majority of intensivists and nephrologists used intravenous bicarbonate to treat metabolic acidosis while the Surviving Sepsis Campaign Guidelines for the Management of Sepsis and Septic Shock published in 2017 recommended against bicarbonate for acidosis. It wasn’t until 2018 that we reached the holy grail: a randomized controlled trial.
The BICAR-ICU study randomly assigned patients with a pH of 7.20 or less, PCO2 of 45 mm Hg or less, and sodium bicarbonate concentration of 20 mmol/L or less to receive no bicarbonate versus a sodium bicarbonate drip to maintain a pH greater than 7.30. There’s additional nuance to the trial design and even more detail in the results. To summarize, there was signal for an improvement in renal outcomes across all patients, and those with AKI saw a mortality benefit. Post–BICAR-ICU iterations of the Surviving Sepsis Campaign Guidelines have incorporated these findings by recommending intravenous bicarbonate for patients with sepsis who have AKI and a pH of 7.20 or less.
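As a concreteness check, the trial’s enrollment criteria and the post-trial guideline rule summarized above can be written as simple predicates. This is a hedged illustration only: the function names are mine, and the thresholds are the ones stated in the text, not a clinical decision tool.

```python
def bicar_icu_enrollment(ph: float, paco2_mmhg: float, bicarb_mmol_l: float) -> bool:
    """BICAR-ICU enrolled patients with pH <= 7.20, PaCO2 <= 45 mm Hg,
    and serum bicarbonate <= 20 mmol/L (as summarized above)."""
    return ph <= 7.20 and paco2_mmhg <= 45 and bicarb_mmol_l <= 20

def guideline_bicarb_recommended(has_sepsis: bool, has_aki: bool, ph: float) -> bool:
    """Post-BICAR-ICU Surviving Sepsis guidance: consider IV bicarbonate
    in patients with sepsis who have AKI and a pH of 7.20 or less."""
    return has_sepsis and has_aki and ph <= 7.20

# The vignette patient (pH 7.1, pCO2 well under 45, AKI) would have met
# the trial's enrollment criteria, but not the sepsis-specific guideline:
print(bicar_icu_enrollment(7.1, 30, 12))               # True
print(guideline_bicarb_recommended(False, True, 7.1))  # False
```

That gap between "would have been enrolled" and "is covered by the guideline recommendation" is exactly the generalizability problem taken up below.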
That’s not to say BICAR-ICU has settled the issue. Although it’s far and away the best we have, there were fewer than 400 total patients in their intention-to-treat analysis. It was open label, with lots of crossover. The primary outcome was negative for the entire population, with only a subgroup (albeit a prespecified one) showing benefit. Finally, the results weren’t stratified by etiology for the metabolic acidosis. There was also evidence of alkalosis and hypocalcemia in the treatment group.
Last but not least, in most cases when bicarbonate is being considered, wouldn’t some form of renal replacement therapy (RRT) be preferred? This point was raised by nephrologists and intensivists when we covered BICAR-ICU in a journal club at my former program. It’s also mentioned in an accompanying editorial. RRT timing is controversial, and a detailed discussion is outside the scope of this piece and beyond the limits of my current knowledge base. But I do know that the A in the A-E-I-O-U mnemonic for acute indications for dialysis stands for acidosis.
Our patient had AKI, a pH of 7.20 or less, and a pCO2 well under 45 mm Hg. Does BICAR-ICU support the resident’s inclination to start a drip? Sort of. The majority of patients enrolled in BICAR-ICU were in shock or were recovering from cardiac arrest, so it’s not clear the results can be generalized to our LOL with starvation ketosis. Extrapolating from studies of diabetic ketoacidosis (DKA) seems more appropriate, and here the data are sparse and equivocal. Reviews are generally negative but don’t rule out the use of intravenous bicarbonate in certain patients with DKA.
Key takeaways
Our patient survived a 24-hour ICU stay with neither cardiopulmonary decompensation nor a need for RRT. Not sure how she did out of the ICU; presumably she was discharged soon after transfer. As is always the case with anecdotal medicine, the absence of a control prevents assessment of the counterfactual. Is it possible she may have done “better” with intravenous bicarbonate? Seems unlikely to me, though I doubt there would have been demonstrable adverse effects. Perhaps next time the fellow can observe resident autonomy?
Aaron B. Holley, MD, is a professor of medicine at Uniformed Services University of the Health Sciences, Bethesda, Md., and a pulmonary/sleep and critical care medicine physician at MedStar Washington Hospital Center. He reported conflicts of interest with Metapharm, CHEST College, and WebMD.
A version of this article first appeared on Medscape.com.
A 70-year-old woman is admitted to the intensive care unit with a pH of 7.1, an acute kidney injury (AKI), and ketonuria. She is volume depleted and her history is consistent with starvation ketosis. This LOL truly is in NAD (that’s little old lady in no acute distress, for those who haven’t read The House of God). She is clinically stable and seemingly unperturbed by the flurry of activity surrounding her admission.
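Since the case turns on blood gas numbers, one quick sanity check on an acidosis like this is Winter’s formula for expected respiratory compensation: expected PCO2 ≈ 1.5 × [HCO3−] + 8, ± 2 mm Hg. A minimal sketch, using an assumed bicarbonate of 8 mmol/L (the case doesn’t report one):

```python
def expected_pco2_winters(hco3_mmol_l: float) -> tuple[float, float]:
    """Winter's formula: expected PCO2 range (mm Hg) if respiratory
    compensation for a metabolic acidosis is appropriate."""
    center = 1.5 * hco3_mmol_l + 8
    return (center - 2, center + 2)

# Hypothetical bicarbonate of 8 mmol/L, chosen only for illustration:
low, high = expected_pco2_winters(8)  # center 20 mm Hg, range (18, 22)
```

A measured PCO2 inside that range suggests a simple, compensated metabolic acidosis; values outside it point to a superimposed respiratory disturbance.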
Your resident is concerned by the severity of the acidosis and suggests starting an intravenous bicarbonate drip. The fellow is adamantly against it. He’s been taught that intravenous bicarbonate increases the serum pH but paradoxically causes intracellular acidosis. As the attending, you elect to observe fellow autonomy – no bicarb is given. Because any debate on rounds is a “teachable moment,” you decide to review the evidence and physiology behind infusing bicarbonate.
What do the data reveal?
An excellent review published in CHEST in 2000 covers the physiologic effects of bicarbonate, specifically in lactic acidosis, which our patient didn’t have. Aside from that difference, the review validates the fellow’s opinion. In short, the acidosis itself is unlikely to provoke hemodynamic or respiratory compromise outside the setting of shock or hypercapnia, while intravenous bicarbonate can lead to intracellular acidosis, hypercapnia, hypocalcemia, and a reduction in oxygen delivery via the Bohr effect. The authors concluded that because the benefits are unproven and the negative effects are real, intravenous bicarbonate should not be used to correct a metabolic acidosis.
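The intracellular-acidosis argument rests on the Henderson-Hasselbalch relationship: infused bicarbonate buffers protons into CO2, and unless ventilation rises to excrete that CO2, the [HCO3−]/PCO2 ratio (and therefore the serum pH) barely moves, while the retained CO2 diffuses into cells. A toy illustration with invented numbers:

```python
import math

def ph_henderson_hasselbalch(hco3_mmol_l: float, pco2_mm_hg: float) -> float:
    """pH = 6.1 + log10([HCO3-] / (0.03 * PCO2))."""
    return 6.1 + math.log10(hco3_mmol_l / (0.03 * pco2_mm_hg))

# Baseline: HCO3 8 mmol/L, PCO2 20 mm Hg (illustrative values only)
before = ph_henderson_hasselbalch(8, 20)

# Bicarbonate raises HCO3 to 12, but if the CO2 generated by buffering
# isn't ventilated off and PCO2 climbs to 30, the ratio is unchanged,
# so the serum pH doesn't move at all:
after = ph_henderson_hasselbalch(12, 30)
```

Because 8/20 and 12/30 are the same ratio, `before` and `after` are identical, even though a lot of bicarbonate was “given”; the extra CO2, meanwhile, is free to cross cell membranes.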
The CHEST review hardly settles the issue, though. A survey published a few years later found that a majority of intensivists and nephrologists used intravenous bicarbonate to treat metabolic acidosis, while the 2017 Surviving Sepsis Campaign Guidelines for the Management of Sepsis and Septic Shock recommended against bicarbonate for acidosis. It wasn’t until 2018 that we reached the holy grail: a randomized controlled trial.
The BICAR-ICU study randomly assigned patients with a pH of 7.20 or less, a PCO2 of 45 mm Hg or less, and a sodium bicarbonate concentration of 20 mmol/L or less to receive no bicarbonate versus a sodium bicarbonate drip to maintain a pH greater than 7.30. There’s additional nuance to the trial design and even more detail in the results. To summarize, there was a signal of improvement in renal outcomes across all patients, and those with AKI saw a mortality benefit. Post–BICAR-ICU iterations of the Surviving Sepsis Campaign Guidelines have incorporated these findings by recommending intravenous bicarbonate for patients with sepsis who have AKI and a pH of 7.20 or less.
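Mapping those enrollment thresholds onto a patient is simple enough to sketch as a predicate. The cutoffs are the ones summarized above; the patient values plugged in below are assumptions for illustration, since the case gives only the pH:

```python
def meets_bicar_icu_criteria(ph: float, pco2_mm_hg: float, hco3_mmol_l: float) -> bool:
    """Enrollment criteria as summarized in the text: pH <= 7.20,
    PCO2 <= 45 mm Hg, and bicarbonate <= 20 mmol/L."""
    return ph <= 7.20 and pco2_mm_hg <= 45 and hco3_mmol_l <= 20

# Our patient: pH 7.1 per the case; PCO2 and bicarbonate are hypothetical
eligible = meets_bicar_icu_criteria(7.1, 20, 8)  # True under these assumptions
```

Meeting the enrollment criteria is, of course, only the first step; whether the trial population resembles the patient is taken up next.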
That’s not to say BICAR-ICU has settled the issue. Although it’s far and away the best we have, there were fewer than 400 total patients in their intention-to-treat analysis. It was open label, with lots of crossover. The primary outcome was negative for the entire population, with only a subgroup (albeit a prespecified one) showing benefit. Finally, the results weren’t stratified by etiology for the metabolic acidosis. There was also evidence of alkalosis and hypocalcemia in the treatment group.
Last but not least, in most cases when bicarbonate is being considered, wouldn’t some form of renal replacement therapy (RRT) be preferred? This point was raised by nephrologists and intensivists when we covered BICAR-ICU in a journal club at my former program. It’s also mentioned in an accompanying editorial. RRT timing is controversial, and a detailed discussion is outside the scope of this piece and beyond the limits of my current knowledge base. But I do know that the A in the A-E-I-O-U mnemonic for the acute indications for dialysis stands for acidosis.
Our patient had AKI, a pH of 7.20 or less, and a PCO2 well under 45 mm Hg. Does BICAR-ICU support the resident’s inclination to start a drip? Sort of. The majority of patients enrolled in BICAR-ICU were in shock or recovering from cardiac arrest, so it’s not clear the results generalize to our LOL with starvation ketosis. Extrapolating from studies of diabetic ketoacidosis (DKA) seems more appropriate, and here the data are sparse and equivocal. Reviews are generally negative but don’t rule out the use of intravenous bicarbonate in certain patients with DKA.
Key takeaways
Our patient survived a 24-hour ICU stay with neither cardiopulmonary decompensation nor a need for RRT. I’m not sure how she did out of the ICU; presumably she was discharged soon after transfer. As is always the case with anecdotal medicine, the absence of a control prevents assessment of the counterfactual. Might she have done “better” with intravenous bicarbonate? It seems unlikely to me, though I doubt there would have been demonstrable adverse effects. Perhaps next time the fellow can observe resident autonomy?
Aaron B. Holley, MD, is a professor of medicine at Uniformed Services University of the Health Sciences, Bethesda, Md., and a pulmonary/sleep and critical care medicine physician at MedStar Washington Hospital Center. He reported conflicts of interest with Metapharm, CHEST College, and WebMD.
A version of this article first appeared on Medscape.com.