Collaborative care aids seniors’ mild depression
A collaborative care model significantly reduced mild depression in the short term in adults aged 65 years and older, compared with usual care, based on data from 705 patients. The findings were published online Feb. 21.
“There is limited research about older people with mild depressive disorders who have insufficient levels of depressive symptoms to meet diagnostic criteria (called subclinical, subthreshold, or subsyndromal depression) but also reduced quality of life and function,” wrote Simon Gilbody, PhD, of the University of York (England) and colleagues. However, subthreshold depression increases the risk of a severe depressive illness, the researchers added.
Overall, patients in the collaborative care group improved from an average score of 7.8 at baseline to 5.4 after 4 months; the usual care group improved from an average of 7.8 at baseline to 6.7 at 4 months. The difference in scores persisted at 12 months in the secondary analysis (JAMA. 2017;317:728-37. doi: 10.1001/jama.2017.0130). “For populations with case-level depression, a successful treatment outcome has been defined as 5 points on the PHQ-9,” the researchers noted. “This magnitude of benefit was not observed in either group of the trial when comparing scores before and after treatment, although this result would be anticipated given the lower baseline PHQ-9 scores in populations with subthreshold depression.”
The study participants came from 32 primary care practices in northern England; the average age was 77 years, and 58% were women.
The results were limited by several factors, including the absence of a standardized interview to diagnose depression, differences in retention and attrition between groups, and the absence of long-term follow-up, “and further research is needed to assess longer-term efficacy,” the researchers said.
Neither Dr. Gilbody nor his colleagues had financial conflicts to disclose.
The CASPER trial “provides the first evidence that collaborative care may benefit patients with subthreshold depression,” Kurt Kroenke, MD, wrote in an accompanying editorial. In addition to the improvements on the Patient Health Questionnaire and the reduction in risk of progression to threshold level depression, the findings further support the use of behavioral activation, which was the core treatment in the study, he said. “Strong evidence for the effectiveness of behavioral activation was provided by the recent COBRA trial. … and behavioral activation was found to be noninferior to cognitive-behavioral therapy for the outcome of depression,” he wrote. However, more research is needed before clinicians routinely expand treatment beyond major depression to include subthreshold depression, Dr. Kroenke noted. Key factors include the variable rate of progression from subthreshold depression to major depression, the duration and context of subthreshold depression, patient preferences, and the possible role of antidepressants, he noted. However, the CASPER findings show “new evidence that collaborative care improves outcomes for at least some patients with subthreshold depression,” Dr. Kroenke said. “Patients with persistent symptoms, functional impairment, and a desire for treatment may particularly benefit,” he added (JAMA. 2017;317:702-4).
Kurt Kroenke, MD, is affiliated with the VA Health Services Research and Development Service Center for Health Communication and Information, Regenstrief Institute and Indiana University, both in Indianapolis. He had no financial conflicts to disclose.
FROM JAMA
Key clinical point: Collaborative care reduced subthreshold depression in older adults, compared with usual care after 4 months.
Major finding: Older adults with subthreshold depression who received collaborative care had lower depression scores on the Patient Health Questionnaire than those who received usual care after 4 months (average scores 5.4 and 6.7, respectively).
Data source: A randomized trial of 705 adults aged 65 years and older with subthreshold depression.
Disclosures: The researchers had no financial conflicts to disclose.
Hope on the horizon for novel antidepressants
LAS VEGAS – There remains a great unmet need for more effective and rapidly acting treatments for major depressive disorder, and research is revealing that both new and existing drugs may help, according to one expert.
One argument for additional treatment options is the current burden of suicide in the United States, which ranks as the 10th leading cause of death among persons aged 10 years and older, Gerard Sanacora, MD, PhD, said at an annual psychopharmacology update held by the Nevada Psychiatric Association. Another argument for new antidepressants stems from the results of the STAR*D trial, which found that only 37% of patients with major depressive disorder who were treated with citalopram monotherapy had remission with the first treatment.
Dr. Sanacora, who is also director of the Yale Depression Research Program, said that there is a reconceptualization of how clinicians think about the pathophysiology of depression and the path to novel treatment development. A variety of novel pharmacologic and somatic treatments, with new mechanisms of action, are currently undergoing validation for treatment-resistant depression. These include glutamatergic, GABAergic, opioid, and anti‐inflammatory drugs.
Drugs that modulate GABAergic and glutamatergic neurotransmission have anxiolytic and antidepressant activities in rodent models of depression. In addition, the robust, rapid, and relatively sustained antidepressant effects of low-dose ketamine have been observed in double-blind, placebo-controlled crossover trials in patients with treatment-resistant major depression (Biol Psych. 2000 Feb 15;47[4]:351-4 and Arch Gen Psych. 2006 Aug;63[8]:856-64). Currently, Dr. Sanacora said, more than 80 clinics in the United States provide ketamine therapy, yet clinicians must balance the potential benefits of the drug against inherent limitations of the ketamine studies to date. These include the fact that the study drug blinding is ineffective; the optimal dose, route, or frequency has not been determined; the duration of effect is unknown; the long-term effectiveness and safety are unclear; and the moderators and mediators of response are unknown.
Results from a National Institute of Mental Health–funded double-blind, placebo-controlled study examining various doses of ketamine in treatment-resistant depression are anticipated sometime this year. In 2013, a trial sponsored by Janssen Research & Development titled A Study to Evaluate the Safety and Efficacy of Intranasal Esketamine in Treatment-Resistant Depression (SYNAPSE) set out to assess the efficacy and dose response of intranasal esketamine (panel A: 28 mg, 56 mg, and 84 mg; panel B: 14 mg and 56 mg), compared with placebo, in improving depressive symptoms in participants with treatment-resistant depression. The researchers found a positive effect of esketamine vs. saline placebo, with some evidence of a dose-response curve suggesting higher doses to be more effective. Some published studies suggest that chronic ketamine use causes impairments in working memory and other cognitive effects (Addiction. 2009 Jan;104[1]:77-87 and Front Psych. 2014 Dec 4;5:149), while others have found that ketamine does not cause memory deficits when given on up to six occasions (Int J Neuropsychopharmacol. 2014 Jun 25;17[11]:1805-13 and J Psychopharmacol. 2014 Apr 3;28[6]:536-44).
Another drug being studied for major depressive disorder is the investigational agent SAGE-547, an allosteric neurosteroid modulator of both synaptic and extrasynaptic GABA receptors. Preliminary results from a double-blind, placebo-controlled phase II trial in 21 patients with postpartum depression showed that the Hamilton Rating Scale for Depression (HAM-D) total score was reduced by SAGE-547, compared with placebo, at 60 hours (P = .008).
Buprenorphine, a partial mu opioid agonist commonly used in addiction treatment, may also play a future role in helping patients with treatment-resistant depression. One randomized study of 88 patients found that those who took very low doses of buprenorphine for 2 or 4 weeks had significantly lower scores on the Beck Suicide Ideation Scale, compared with their counterparts on placebo (Am J Psychiatry. 2016 May 1;173[5]:491-8).
Drugs with anti-inflammatory properties may also have a role. One study of 60 patients found that the tumor necrosis factor–alpha antagonist infliximab may benefit patients with treatment-resistant depression who have high inflammatory biomarkers at baseline (JAMA Psychiatry. 2013 Jan;70[1]:31-41).
“Active participation in clinical research efforts is critical to the advancement of future treatment approaches,” he said.
Dr. Sanacora disclosed having received consulting fees and/or research agreements from numerous industry sources. In addition, free medication was provided to Dr. Sanacora by sanofi‐aventis for a study sponsored by the National Institutes of Health.
EXPERT ANALYSIS FROM THE NPA PSYCHOPHARMACOLOGY UPDATE
Light therapy eases Parkinson’s-related sleep disturbances
Light therapy significantly reduced excessive daytime sleepiness, improved sleep quality, decreased overnight awakenings, shortened sleep latency, enhanced daytime alertness and activity level, and improved motor symptoms in patients with Parkinson’s disease, according to a report published online Feb. 20 in JAMA Neurology.
The noninvasive, nonpharmacologic treatment was well tolerated, and patient adherence was excellent in a small, multicenter, randomized controlled trial. Light therapy is widely available as a treatment for several sleep and psychiatric disorders and is “relatively easy to prescribe and incorporate into a clinical practice,” said Aleksandar Videnovic, MD, of the department of neurology at Massachusetts General Hospital and the division of sleep medicine at Harvard Medical School, both in Boston, and his associates.
To assess the safety and efficacy of light therapy as a novel treatment for PD, they studied 31 adults (age range, 32-77 years) who had a mean disease duration of 6 years. These study participants were randomly assigned to 1 hour of exposure to 10,000 lux of bright light (16 patients in the intervention group) or 1 hour of exposure to less than 300 lux of dim red light (15 control subjects) every morning and every afternoon for 2 weeks.
The study participants – 13 men and 18 women – also wore actigraphy monitors all day and all night, completed daily sleep diaries, and noted daytime sleepiness in a log every 2 hours, 3 days per week.
Bright light significantly improved excessive daytime sleepiness as measured by the Epworth Sleepiness Scale and self-reported alertness during wake time, as well as several sleep metrics such as overall sleep quality, overnight awakenings, and ease of falling asleep. All the patients in the intervention group reported being more refreshed in the mornings during the study period, as compared with baseline.
Light therapy also improved overall PD severity as measured by the Unified Parkinson’s Disease Rating Scale, particularly in scores related to activities of daily living and motor symptoms. Moreover, this effect persisted during the 2-week washout period after treatment was discontinued, Dr. Videnovic and his associates said (JAMA Neurol. 2017 Feb 20. doi: 10.1001/jamaneurol.2016.5192).
The treatment was well tolerated. In the intervention group, one patient reported headache and another sleepiness, and in the control group one patient reported itchy eyes. These effects resolved spontaneously, and neither led to treatment withdrawal.
“Based on these results, the next logical step is to optimize various parameters of light therapy (e.g., intensity, duration, and wavelength) not only for impaired sleep and alertness but also for other motor and nonmotor manifestation of PD,” the investigators wrote.
A major limitation of this study was that exposure to ambient light throughout the day was not measured. Some people in the control group received as much or even more light exposure than those assigned to bright-light therapy. “Future studies may be more strict in controlling such exposures,” Dr. Videnovic and his associates said.
This study was supported by the National Parkinson Foundation and the National Institutes of Health. Dr. Videnovic reported having no relevant financial disclosures. One of his associates reported ties to Merck, Phillips, Eisai, and Teva.
The study by Dr. Videnovic and his associates is important because it introduces a new concept into the much-studied phenomenon of sleep disturbances in Parkinson’s disease.
The authors demonstrated that chronobiological interventions can be used therapeutically in PD. Accounting for circadian physiology also sets a new standard for future studies of sleep, nighttime wakefulness, and daytime function not only in PD but, it is hoped, in other diseases as well.
Birgit Högl, MD, is with the department of neurology at the Medical University of Innsbruck (Austria). She reported receiving honoraria as a speaker, advisory board member, or consultant from UCB, Otsuka, Lundbeck, Lilly, Axovant, AbbVie, Mundipharma, Benevolent Bio, and Janssen Cilag, and travel support from Habel Medizintechnik and Vivisol. Dr. Högl made these remarks in an editorial (JAMA Neurol. 2017 Feb 20. doi: 10.1001/jamaneurol.2016.5519) accompanying the report by Dr. Videnovic and his colleagues.
Light therapy significantly reduced excessive daytime sleepiness, improved sleep quality, decreased overnight awakenings, shortened sleep latency, enhanced daytime alertness and activity level, and improved motor symptoms in patients with Parkinson’s disease, according to a report published online Feb. 20 in JAMA Neurology.
The noninvasive, nonpharmacologic treatment was well tolerated, and patient adherence was excellent in a small, multicenter, randomized controlled trial. Light therapy is widely available as a treatment for several sleep and psychiatric disorders and is “relatively easy to prescribe and incorporate into a clinical practice,” said Aleksandar Videnovic, MD, of the department of neurology at Massachusetts General Hospital and the division of sleep medicine at Harvard Medical School, both in Boston, and his associates.
To assess the safety and efficacy of light therapy as a novel treatment for PD, they studied 31 adults (age range, 32-77 years) who had a mean disease duration of 6 years. These study participants were randomly assigned to use 1 hour of exposure to 10,000 lux of bright light (16 patients in the intervention group) or 1 hour of exposure to less than 300 lux of dim red light (15 control subjects) every morning and every afternoon for 2 weeks.
The study participants – 13 men and 18 women – also wore actigraphy monitors all day and all night, completed daily sleep diaries, and noted daytime sleepiness in a log every 2 hours, 3 days per week.
Bright light significantly improved excessive daytime sleepiness as measured by the Epworth Sleepiness Scale and self-reported alertness during wake time, as well as several sleep metrics such as overall sleep quality, overnight awakenings, and ease of falling asleep. All the patients in the intervention group reported being more refreshed in the mornings during the study period, as compared with baseline.
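For readers unfamiliar with the Epworth Sleepiness Scale used here: it sums eight self-rated items (likelihood of dozing in everyday situations), each scored 0-3, for a total of 0-24, with scores above 10 conventionally suggesting excessive daytime sleepiness. A minimal illustrative scoring helper (the function name is ours; the trial used the standard questionnaire):

```python
def epworth_score(item_ratings):
    """Sum of eight 0-3 self-ratings of dozing likelihood.
    Totals range 0-24; scores above 10 are conventionally
    taken to suggest excessive daytime sleepiness."""
    if len(item_ratings) != 8 or any(not 0 <= r <= 3 for r in item_ratings):
        raise ValueError("expected eight ratings, each 0-3")
    return sum(item_ratings)

# Example: a patient rating moderate dozing likelihood on most items
print(epworth_score([2, 2, 1, 2, 1, 2, 1, 2]))  # 13 -> above the cutoff of 10
```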
Light therapy also improved overall PD severity as measured by the Unified Parkinson’s Disease Rating Scale, particularly in scores related to activities of daily living and motor symptoms. Moreover, this effect persisted during the 2-week washout period after treatment was discontinued, Dr. Videnovic and his associates said (JAMA Neurol. 2017 Feb 20. doi: 10.1001/jamaneurol.2016.5192).
The treatment was well tolerated. In the intervention group, one patient reported headache and another sleepiness, and in the control group one patient reported itchy eyes. The effects resolved spontaneously, and neither led to treatment withdrawal.
“Based on these results, the next logical step is to optimize various parameters of light therapy (e.g., intensity, duration, and wavelength) not only for impaired sleep and alertness but also for other motor and nonmotor manifestation of PD,” the investigators wrote.
A major limitation of this study was that exposure to ambient light throughout the day was not measured. Some people in the control group received as much or even more light exposure than those assigned to bright-light therapy. “Future studies may be more strict in controlling such exposures,” Dr. Videnovic and his associates said.
This study was supported by the National Parkinson Foundation and the National Institutes of Health. Dr. Videnovic reported having no relevant financial disclosures. One of his associates reported ties to Merck, Phillips, Eisai, and Teva.
FROM JAMA NEUROLOGY
Key clinical point: Twice-daily bright light therapy improved daytime sleepiness, sleep quality, and motor function in patients with Parkinson’s disease.
Major finding: Compared with a control condition, bright light significantly improved excessive daytime sleepiness as measured by the Epworth Sleepiness Scale and self-reported alertness during wake time.
Data source: A randomized controlled trial involving 31 adults with Parkinson’s disease–related sleep disturbances.
Disclosures: This study was supported by the National Parkinson Foundation and the National Institutes of Health. Dr. Videnovic reported having no relevant financial disclosures. One of his associates reported ties to Merck, Phillips, Eisai, and Teva.
Safety of Superior Labrum Anterior and Posterior (SLAP) Repair Posterior to Biceps Tendon Is Improved With a Percutaneous Approach
Take-Home Points
- Anchors placed posterior to the biceps during SLAP repair are at risk for glenoid vault penetration and/or suprascapular nerve (SSN) injury.
- The risks of vault penetration and SSN injury are reduced by using a Port of Wilmington (PW) portal instead of an anterior portal.
- A percutaneous PW portal is safe and passes through the rotator cuff muscle only.
Since being classified by Snyder and colleagues,1 various arthroscopic techniques have been used to repair superior labrum anterior and posterior (SLAP) tears, particularly type II tears. Despite being commonly performed, repairs of SLAP lesions remain challenging. There is high variability in the rate of good/excellent functional outcomes and athletes’ return to previous level of play after SLAP repairs.2,3 Furthermore, the rate of complications after SLAP repair is as high as 5%.4
One of the most common complications of repair of a type II SLAP tear is nerve injury.4 In particular, suprascapular nerve (SSN) injury has occurred after arthroscopic repair of SLAP tears.5,6 Three cadaveric studies have demonstrated that glenoid vault penetration is common during placement of knotted anchors for SLAP repair and that the SSN is at risk during placement of these anchors.7-9 However, 2 of the 3 studies used only an anterior portal in their evaluation of anchor placement. Safety of anchor placement posterior to the biceps tendon may be improved with a percutaneous approach using a Port of Wilmington (PW) portal.10,11 No studies have evaluated the risk of glenoid vault penetration and SSN injury with shorter knotless anchors.
We conducted a study to compare a standard anterosuperolateral (ASL) portal with a percutaneous PW portal for knotless anchors placed posterior to the biceps tendon during repair of SLAP tears. We hypothesized that anchors placed through the PW portal would be less likely to penetrate the glenoid vault and would be farther from the SSN in the event of bone penetration.
Materials and Methods
Six matched pairs of fresh human cadaveric shoulders were used in this study. Each specimen included the scapula, the clavicle, and the humerus. All 6 specimens were male, and their mean age was 41.2 years (range, 23-59 years). Shoulder arthroscopy was performed for placement of SLAP anchors, and open dissection followed.
Anchor Placement
The scapula was clamped and the shoulder placed in the lateral decubitus position with 30° of abduction, 20° of forward flexion, and neutral rotation.10 A standard posterior glenohumeral viewing portal was established and a 30° arthroscope inserted. Both shoulders of each matched pair were randomly assigned to anchor placement through either an ASL portal or a PW portal. Two anchors were placed in the superior glenoid to simulate repair of a posterior SLAP tear.11 Each was a 2.9-mm short (12.5-mm) knotless anchor (BioComposite PushLock; Arthrex) that included a polyetheretherketone (PEEK) eyelet for threading sutures before anchor placement. A drill guide was inserted according to manufacturer guidelines, and a 2.9-mm drill was used to make a bone socket 18 mm deep. The anchor eyelet was loaded with suture tape (Labral Tape; Arthrex), and the anchor and suture were inserted into the socket. The sutures were left uncut to aid in anchor visualization during open dissection. On a right shoulder, the first anchor was placed just posterior to the biceps tendon, at 11 o’clock, and the second anchor about 1 cm posterior to the first, at 10 o’clock. All anchors were placed by an arthroscopy fellowship–trained shoulder surgeon. Before placement, anchor location was confirmed by another arthroscopy fellowship–trained shoulder surgeon.
The ASL portal was created, with an 18-gauge spinal needle and an outside-in technique, about 1 cm lateral to the anterolateral corner of the acromion.
In the opposite shoulder, the PW portal was created, with a percutaneous technique, about 1 cm anterior and 1 cm lateral to the posterolateral corner of the acromion. An 18-gauge spinal needle was inserted to allow a 45° angle of approach to the posterosuperior glenoid.11
Cadaveric Dissection
After anchor placement, another shoulder surgeon performed the dissection. Skin, subcutaneous tissue, deltoid, and clavicle were removed. In the percutaneous specimens, PW portal location relative to rotator cuff was recorded before cuff removal. After overlying soft tissues were removed from a specimen, the anchors were examined for glenoid vault penetration. In the setting of vault penetration, digital calipers were used to measure the shortest distance from anchor to SSN.
Results
In the ASL portal group, 8 (66.7%) of 12 anchors (4/6 at 11 o’clock, 4/6 at 10 o’clock) penetrated the medial glenoid vault.
In the PW portal group, 2 (16.7%) of 12 anchors (1/6 at 11 o’clock, 1/6 at 10 o’clock, both from a single specimen) penetrated the medial glenoid vault. In both cases, it was the PEEK eyelet, not the anchor body, that penetrated the vault. In the penetration cases, distance to SSN was 20 mm for the 11 o’clock anchor and 8 mm for the 10 o’clock anchor (Table). Of the 6 portals, 3 passed through the supraspinatus muscle, 2 through the infraspinatus musculotendinous junction, and 1 through the infraspinatus muscle.
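The paper reports raw counts (8/12 vs 2/12) rather than a significance test. As an illustrative sanity check only, treating each of the 12 anchors per group as independent (which overstates precision, since anchors cluster within specimens), the penetration rates and a one-sided Fisher exact p-value can be computed with the Python standard library; the function name is ours:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    probability of a table at least as extreme as the one observed,
    computed from the hypergeometric distribution."""
    n = a + b + c + d
    row1, col1 = a + b, a + c  # first-row total, first-column total
    def p_table(x):
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    # Sum over tables with >= a penetrations in the first group
    return sum(p_table(x) for x in range(a, min(row1, col1) + 1))

asl_pen, asl_ok = 8, 4    # ASL portal: 8 of 12 anchors penetrated the vault
pw_pen, pw_ok = 2, 10     # PW portal: 2 of 12 anchors penetrated the vault

print(f"ASL rate: {asl_pen / 12:.1%}")  # 66.7%, matching the reported rate
print(f"PW rate:  {pw_pen / 12:.1%}")   # 16.7%, matching the reported rate
print(f"one-sided Fisher p = {fisher_exact_one_sided(asl_pen, asl_ok, pw_pen, pw_ok):.4f}")
```

Under these (simplifying) independence assumptions the difference would reach conventional significance, but a clustered analysis at the specimen level would be the appropriate formal test.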
Discussion
Our study findings support the hypothesis that SLAP repair anchors placed posterior to the biceps tendon are more likely to remain in bone with use of a percutaneous approach relative to an ASL approach. Our findings also support the growing body of evidence that such anchors placed with an anterior approach increase the risk for SSN injury.
Three other cadaveric studies have evaluated anchor placement for SLAP repair. Chan and colleagues7 evaluated drill penetration during bone socket preparation for SLAP repair in 21 matched pairs of formalin-embalmed cadavers. A 20-mm drill was used for correspondence to a 14.5-mm anchor, though no anchors were inserted, and sockets were created in an open manner. Through a mimicked ASL portal, 1 socket was made anterior to the biceps tendon, at 1 o’clock; then, through a mimicked PW portal, 2 sockets were made posterior to the tendon, at 11 o’clock and 9 to 10 o’clock. Glenoid vault penetration occurred in 29% of the 42 anterior sockets, but only 1 anchor (2.4%) touched the SSN. Penetration did not occur with the 11 o’clock anchors. The 9 to 10 o’clock anchor was at highest risk for SSN injury (9.5%, 4 cases). The study was limited by lack of anchor placement and open creation of bone sockets in embalmed cadavers.
Koh and colleagues8 evaluated arthroscopic placement of anterior SLAP anchors in 6 matched pairs of fresh-frozen cadavers. Through an ASL portal, each 14.5-mm knotted anchor was placed anterior to the biceps tendon, at 1 o’clock. As in the study by Chan and colleagues,7 drill depth was 20 mm. Notably, anchors were seated 2 mm beyond manufacturer recommendations, and the cadavers were of Asian origin, likely indicating smaller glenoids compared to specimens from North America or Europe. All 12 anchors penetrated the glenoid vault; mean distance to SSN was 3.1 mm.
Morgan and colleagues9 compared anterior and ASL portals created for SLAP repairs in 10 matched-pair cadavers. Anchors were placed at 1 o’clock, 11 o’clock, and 10 o’clock. As in the studies by Chan and colleagues7 and Koh and colleagues,8 14.5-mm knotted anchors were used. One anterior anchor (10%) placed through an ASL portal penetrated the cortex by 1 mm, and 2 anterior anchors (20%) placed through anterior portals penetrated the cortex (1 was completely out of the bone). Overall, 65% of 11 o’clock anchors and 100% of 10 o’clock anchors violated the glenoid vault. With the 11 o’clock anchors, mean distance to SSN was 6 mm for ASL portals and 4.2 mm for anterior portals; with the 10 o’clock anchors, mean distance to SSN was 8 mm for ASL portals and 2.1 mm for anterior portals.
Overall, the results of these 3 studies suggest that, with use of ASL portals, placement of SLAP anchors anterior to the biceps tendon is safe. Using the same portals, however, anchors placed posterior to the tendon are at higher risk for glenoid vault penetration. Supporting these findings are our study’s penetration rates: 66.7% for anchors placed through ASL portals and 16.7% for anchors placed through percutaneous PW portals. The different rates are not surprising given that the coracoid process projects anterior to the glenoid and provides additional bone stock for placement of anchors anteriorly vs posteriorly. Therefore, with percutaneous PW portals, the approach angle directs the anchor toward the bone of the coracoid base. Furthermore, the SSN passes nearest the posterior aspect of the glenoid. In a study by Shishido and Kikuchi,12 the distance from the posterior rim of the glenoid to the SSN was 18 mm, and from the superior rim was 29 mm. Therefore, anchors placed with an anterior approach naturally are directed toward the SSN.
In addition to portal placement and approach angle, anchor length likely affects the risks for glenoid vault penetration and SSN injury.
One limitation of this study was the small number of cadavers, all of which were male. Female cadavers and cadavers of other ethnic origins likely have smaller glenoid vaults, and their inclusion may have altered our results. However, this issue has been well described in the studies discussed above, and because our goal was simply to compare ASL portals with percutaneous PW portals within matched pairs, we do not think it changes the finding that the risks for glenoid vault penetration and SSN injury are reduced with use of PW portals for anchors placed posterior to the biceps tendon.
Conclusion
This study was the first to examine glenoid vault penetration and SSN proximity with short anchors for SLAP repair. The risk for glenoid vault penetration during repair of SLAP tears posterior to the biceps tendon was reduced by anchor placement with a percutaneous posterior approach. The percutaneous posterior approach also directs the anchor away from the SSN.
Am J Orthop. 2017;46(1):E60-E64. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.
1. Snyder SJ, Banas MP, Karzel RP. An analysis of 140 injuries to the superior glenoid labrum. J Shoulder Elbow Surg. 1995;4(4):243-248.
2. Denard PJ, Lädermann A, Burkhart SS. Long-term outcome after arthroscopic repair of type II SLAP lesions: results according to age and workers’ compensation status. Arthroscopy. 2012;28(4):451-457.
3. Gorantla K, Gill C, Wright RW. The outcome of type II SLAP repair: a systematic review. Arthroscopy. 2010;26(4):537-545.
4. Weber SC, Martin DF, Seiler JG 3rd, Harrast JJ. Superior labrum anterior and posterior lesions of the shoulder: incidence rates, complications, and outcomes as reported by American Board of Orthopedic Surgery Part II candidates. Am J Sports Med. 2012;40(7):1538-1543.
5. Kim SH, Koh YG, Sung CH, Moon HK, Park YS. Iatrogenic suprascapular nerve injury after repair of type II SLAP lesion. Arthroscopy. 2010;26(7):1005-1008.
6. Yoo JC, Lee YS, Ahn JH, Park JH, Kang HJ, Koh KH. Isolated suprascapular nerve injury below the spinoglenoid notch after SLAP repair. J Shoulder Elbow Surg. 2009;18(4):e27-e29.
7. Chan H, Beaupre LA, Bouliane MJ. Injury of the suprascapular nerve during arthroscopic repair of superior labral tears: an anatomic study. J Shoulder Elbow Surg. 2010;19(5):709-715.
8. Koh KH, Park WH, Lim TK, Yoo JC. Medial perforation of the glenoid neck following SLAP repair places the suprascapular nerve at risk: a cadaveric study. J Shoulder Elbow Surg. 2011;20(2):245-250.
9. Morgan RT, Henn RF 3rd, Paryavi E, Dreese J. Injury to the suprascapular nerve during superior labrum anterior and posterior repair: is a rotator interval portal safer than an anterosuperior portal? Arthroscopy. 2014;30(11):1418-1423.
10. Lo IK, Lind CC, Burkhart SS. Glenohumeral arthroscopy portals established using an outside-in technique: neurovascular anatomy at risk. Arthroscopy. 2004;20(6):596-602.
11. Morgan CD, Burkhart SS, Palmeri M, Gillespie M. Type II SLAP lesions: three subtypes and their relationships to superior instability and rotator cuff tears. Arthroscopy. 1998;14(6):553-565.
12. Shishido H, Kikuchi S. Injury of the suprascapular nerve in shoulder surgery: an anatomic study. J Shoulder Elbow Surg. 2001;10(4):372-376.
13. Uggen C, Wei A, Glousman RE, et al. Biomechanical comparison of knotless anchor repair versus simple suture repair for type II SLAP lesions. Arthroscopy. 2009;25(10):1085-1092.
14. Kim SH, Crater RB, Hargens AR. Movement-induced knot migration after anterior stabilization in the shoulder. Arthroscopy. 2013;29(3):485-490.
Cadaveric Dissection
After anchor placement, another shoulder surgeon performed the dissection. Skin, subcutaneous tissue, deltoid, and clavicle were removed. In the percutaneous specimens, PW portal location relative to rotator cuff was recorded before cuff removal. After overlying soft tissues were removed from a specimen, the anchors were examined for glenoid vault penetration. In the setting of vault penetration, digital calipers were used to measure the shortest distance from anchor to SSN.
Results
In the ASL portal group, 8 (66.7%) of 12 anchors (4/6 at 11 o’clock, 4/6 at 10 o’clock) penetrated the medial glenoid vault.
In the PW portal group, 2 (16.7%) of 12 anchors (1/6 at 11 o’clock, 1/6 at 10 o’clock, both from a single specimen) penetrated the medial glenoid vault. In both cases, it was the PEEK eyelet, not the anchor body, that penetrated the vault. In the penetration cases, distance to SSN was 20 mm for the 11 o’clock anchor and 8 mm for the 10 o’clock anchor (Table). Of the 6 portals, 3 passed through the supraspinatus muscle, 2 through the infraspinatus musculotendinous junction, and 1 through the infraspinatus muscle.
Discussion
Our study findings support the hypothesis that SLAP repair anchors placed posterior to the biceps tendon are more likely to remain in bone with use of a percutaneous approach relative to an ASL approach. Our findings also support the growing body of evidence that such anchors placed with an anterior approach increase the risk for SSN injury.
Three other cadaveric studies have evaluated anchor placement for SLAP repair. Chan and colleagues7 evaluated drill penetration during bone socket preparation for SLAP repair in 21 matched pairs of formalin-embalmed cadavers. A 20-mm drill was used for correspondence to a 14.5-mm anchor, though no anchors were inserted, and sockets were created in an open manner. Through a mimicked ASL portal, 1 socket was made anterior to the biceps tendon, at 1 o’clock; then, through a mimicked PW portal, 2 sockets were made posterior to the tendon, at 11 o’clock and 9 to 10 o’clock. Glenoid vault penetration occurred in 29% of the 42 anterior sockets, but only 1 anchor (2.4%) touched the SSN. Penetration did not occur with the 11 o’clock anchors. The 9 to 10 o’clock anchor was at highest risk for SSN injury (9.5%, 4 cases). The study was limited by lack of anchor placement and open creation of bone sockets in embalmed cadavers.
Koh and colleagues8 evaluated arthroscopic placement of anterior SLAP anchors in 6 matched pairs of fresh-frozen cadavers. Through an ASL portal, each 14.5-mm knotted anchor was placed anterior to the biceps tendon, at 1 o’clock. As in the study by Chan and colleagues,7 drill depth was 20 mm. Notably, anchors were seated 2 mm beyond manufacturer recommendations, and the cadavers were of Asian origin, likely indicating smaller glenoids compared to specimens from North America or Europe. All 12 anchors penetrated the glenoid vault; mean distance to SSN was 3.1 mm.
Morgan and colleagues9 compared anterior and ASL portals created for SLAP repairs in 10 matched-pair cadavers. Anchors were placed at 1 o’clock, 11 o’clock, and 10 o’clock. As in the studies by Chan and colleagues7 and Koh and colleagues,8 14.5-mm knotted anchors were used. One anterior anchor (10%) placed through an ASL portal penetrated the cortex by 1 mm, and 2 anterior anchors (20%) placed through anterior portals penetrated the cortex (1 was completely out of the bone). Overall, 65% of 11 o’clock anchors and 100% of 10 o’clock anchors violated the glenoid vault. With the 11 o’clock anchors, mean distance to SSN was 6 mm for ASL portals and 4.2 mm for anterior portals; with the 10 o’clock anchors, mean distance to SSN was 8 mm for ASL portals and 2.1 mm for anterior portals.
Overall, the results of these 3 studies suggest that, with use of ASL portals, placement of SLAP anchors anterior to the biceps tendon is safe. Using the same portals, however, anchors placed posterior to the tendon are at higher risk for glenoid vault penetration. Supporting these findings are our study’s penetration rates: 66.7% for anchors placed through ASL portals and 16.7% for anchors placed through percutaneous PW portals. The different rates are not surprising given that the coracoid process projects anterior to the glenoid and provides additional bone stock for placement of anchors anteriorly vs posteriorly. Therefore, with percutaneous PW portals, the approach angle directs the anchor toward the bone of the coracoid base. Furthermore, the SSN passes nearest the posterior aspect of the glenoid. In a study by Shishido and Kikuchi,12 the distance from the posterior rim of the glenoid to the SSN was 18 mm, and from the superior rim was 29 mm. Therefore, anchors placed with an anterior approach naturally are directed toward the SSN.
In addition to portal placement and approach angle, anchor length likely affects the risks for glenoid vault penetration and SSN injury. The 12.5-mm knotless anchors used in our study are shorter than the 14.5-mm knotted anchors used in the prior cadaveric studies, which may partly account for our lower penetration rates.
One limitation of this study was the small number of cadavers, all of which were male. Female cadavers and cadavers of other ethnic origins likely have smaller glenoid vaults, and their inclusion might have altered our results. This issue was well described in the studies discussed above. However, because our goal was simply to compare ASL portals with percutaneous PW portals, we think our central finding stands: the risks for glenoid vault penetration and SSN injury are reduced with use of PW portals for anchors placed posterior to the biceps tendon.
Conclusion
This study was the first to examine glenoid vault penetration and SSN proximity with short anchors for SLAP repair. The risk for glenoid vault penetration during repair of SLAP tears posterior to the biceps tendon was reduced by anchor placement with a percutaneous posterior approach. The percutaneous posterior approach also directs the anchor away from the SSN.
1. Snyder SJ, Banas MP, Karzel RP. An analysis of 140 injuries to the superior glenoid labrum. J Shoulder Elbow Surg. 1995;4(4):243-248.
2. Denard PJ, Lädermann A, Burkhart SS. Long-term outcome after arthroscopic repair of type II SLAP lesions: results according to age and workers’ compensation status. Arthroscopy. 2012;28(4):451-457.
3. Gorantla K, Gill C, Wright RW. The outcome of type II SLAP repair: a systematic review. Arthroscopy. 2010;26(4):537-545.
4. Weber SC, Martin DF, Seiler JG 3rd, Harrast JJ. Superior labrum anterior and posterior lesions of the shoulder: incidence rates, complications, and outcomes as reported by American Board of Orthopedic Surgery Part II candidates. Am J Sports Med. 2012;40(7):1538-1543.
5. Kim SH, Koh YG, Sung CH, Moon HK, Park YS. Iatrogenic suprascapular nerve injury after repair of type II SLAP lesion. Arthroscopy. 2010;26(7):1005-1008.
6. Yoo JC, Lee YS, Ahn JH, Park JH, Kang HJ, Koh KH. Isolated suprascapular nerve injury below the spinoglenoid notch after SLAP repair. J Shoulder Elbow Surg. 2009;18(4):e27-e29.
7. Chan H, Beaupre LA, Bouliane MJ. Injury of the suprascapular nerve during arthroscopic repair of superior labral tears: an anatomic study. J Shoulder Elbow Surg. 2010;19(5):709-715.
8. Koh KH, Park WH, Lim TK, Yoo JC. Medial perforation of the glenoid neck following SLAP repair places the suprascapular nerve at risk: a cadaveric study. J Shoulder Elbow Surg. 2011;20(2):245-250.
9. Morgan RT, Henn RF 3rd, Paryavi E, Dreese J. Injury to the suprascapular nerve during superior labrum anterior and posterior repair: is a rotator interval portal safer than an anterosuperior portal? Arthroscopy. 2014;30(11):1418-1423.
10. Lo IK, Lind CC, Burkhart SS. Glenohumeral arthroscopy portals established using an outside-in technique: neurovascular anatomy at risk. Arthroscopy. 2004;20(6):596-602.
11. Morgan CD, Burkhart SS, Palmeri M, Gillespie M. Type II SLAP lesions: three subtypes and their relationships to superior instability and rotator cuff tears. Arthroscopy. 1998;14(6):553-565.
12. Shishido H, Kikuchi S. Injury of the suprascapular nerve in shoulder surgery: an anatomic study. J Shoulder Elbow Surg. 2001;10(4):372-376.
13. Uggen C, Wei A, Glousman RE, et al. Biomechanical comparison of knotless anchor repair versus simple suture repair for type II SLAP lesions. Arthroscopy. 2009;25(10):1085-1092.
14. Kim SH, Crater RB, Hargens AR. Movement-induced knot migration after anterior stabilization in the shoulder. Arthroscopy. 2013;29(3):485-490.
Medication for life
Some areas of psychiatry would benefit from more controversy. One of them is the prescription of antidepressants to young people dealing with romantic disappointments.
I have seen many young men and women given an antidepressant for the very painful, but ordinary, romantic break-ups characteristic of this phase of life, who then become habituated to the drug. They take the medication indefinitely, their brains accommodate neurophysiologically to the presence of the chemical, and they become unable to discontinue it without intolerable withdrawal symptoms that look like an underlying illness. A parallel phenomenon occurs not infrequently with the use of amphetamines (and other stimulants) for attention-deficit hyperactivity disorder that is at times mistakenly diagnosed in this age group.
Antidepressants for early romantic disappointments
Mr. A, now in his 30s, became sullen and withdrawn at age 16 after a girl refused his romantic approaches. His well-intentioned parents took him to a psychiatrist, who, after a brief evaluation, prescribed fluoxetine. Mr. A is now well adjusted and happily married but unable to get off fluoxetine. Even when it is carefully tapered, 2 or 3 months after it is discontinued, he becomes anxious and depressed. This is an iatrogenic problem. It is not related to goings-on in his mind or his life; rather it is the result of his brain’s accommodation to a medication, producing a serious withdrawal syndrome.
His original psychiatrist made only a descriptive diagnosis. He did not inquire about what was going on in Mr. A’s mind and thus could not make a dynamic diagnosis (that is, a diagnosis of a patient’s central emotional conflicts, ability to function in relation to other people, strengths, and weaknesses). Mr. A, like many adolescents, had a lot of anxiety and guilt about sexual and romantic involvement, and potential success. He defended against his anxiety and guilt by assuring himself life would never work out for him. When the girl he admired rebuffed him, he immediately concluded this would perpetually be his fate, so the girl’s refusal was particularly painful. Mr. A feels that had this dynamic been discussed with him at the time, he may well not have needed medication at all.
Ms. B, like Mr. A, was prescribed antidepressants for depressive reactions to early romantic disappointments. Likewise, she self-punitively convinced herself, despite easily attracting men’s attentions, that these disappointments meant a lifetime alone. Ms. B has a family history of depression (although neither of her brothers struggles with it), and she felt that she needed the medications to help negotiate difficult periods. But should she have been on them for extended periods of time? Therapeutic attention to her emotional conflicts helped her to form lasting relationships, marry, and have children. Unable to get off the medications, she had to deal with the risks of their use during pregnancy, which she then subjected to the same sort of guilty self-accusations as she previously had used to limit her romantic prospects.
Ms. C came to me on three medications – one for each of her significant romantic break-ups. She, too, was depressively self-diminishing, beginning therapy by letting me know all the things she could think of that might make me think less of her. Understanding some of the reasons for her self-deprecation helped her toward better romantic relationships but did not give her the courage to get off her medications. Pregnancy, however, led her to promptly and successfully discontinue an antidepressant and a mood stabilizer (she has never had any symptoms suggestive of manic depression). She remained on a low dose of a selective serotonin reuptake inhibitor, had an uneventful pregnancy, and then fell in love with a charming baby.
Principles for consideration
• Psychiatrists (and other mental health professionals and primary care physicians treating mental illness) should always make a dynamic, and not merely a descriptive, diagnosis. Even with a more clearly biologically driven problem, such as bipolar disorder, the patient’s personality and conflicts matter.
• Psychiatrists should be very judicious about prescribing medications in adolescence and young adulthood, especially for difficulties adapting to the typical events of those phases of life. Expert psychotherapy should be the first choice in these instances.
• Medication, when necessary, should be prescribed for as limited a time as possible. It is important for young people to advance their own development, not feel needlessly beholden to medications, not get iatrogenically dependent on them, and not feel that they have “diseases” they don’t have.
Amphetamines for misdiagnosed ADHD
When Ms. D’s family moved to a new house, she, her brother, and her sister each attended a new school. Unlike her siblings, Ms. D, who was in high school, had a difficult adjustment. Her grades fell. She was taken to a psychiatrist who diagnosed ADHD and prescribed amphetamines. The psychiatrist paid little attention to her prior lack of difficulty in school or her struggles making new friends. Nor did the psychiatrist learn that Ms. D had to ward off the seductive advances of an older teacher (although Ms. D would likely not have been immediately forthcoming about this at the time).
When Ms. D came to me as a college student, for troubles with anger, anxiety, and some depression, she was religiously taking 70 mg of amphetamines daily. After I learned a bit about her and raised the question of whether she actually had ADHD, and whether it might make sense to consider tapering the amphetamines, she was appalled and looked like a toddler who was afraid I was about to steal her candy. Helping her to get off the unneeded medication was a multiyear process.
First, she had to recognize that it was prescribed to treat a problem she probably didn’t have, and second, that it was failing to help her with the problems she did have. As we attended to some of her actual emotional conflicts, she became willing to experiment with lower doses. She was able to see that her work was little changed as the dose was lowered, and that her difficulties with school had more to do with feelings toward classmates and teachers than with the presence or absence of amphetamines. After a protracted struggle, finally off the medication, she felt in charge of her life and no longer believed there was something inherently wrong with her mind or her brain.
Mr. E was the only son in a high-powered academic family. His older sisters were all intellectual standouts. Early in high school, he received his first B as a grade in a course. He was taken to a pediatrician, diagnosed with ADHD, and put on stimulants. Like Ms. D, he came to believe that he needed them. In college, he began to develop some magical aspects to his thinking, a potential side effect of the stimulants. It was very difficult to help him see either that he had a problem with his thinking or that it might be attributable to the medication.
Principles to consider
• If the ADHD wasn’t there in elementary school or before, it is unlikely that an adolescent or young adult has new-onset ADHD. A new or newly amplified conflict is occurring in the person’s mind and life. A dynamic diagnosis, as always, is essential.
• When medication is prescribed for actual ADHD, as with anything else, the question of how long it will be taken must be asked. For life? Until other means of adaptation are accomplished? Until adequate outcome studies of long-term use of the medication are performed?
Helping patients to get off unneeded, or no longer needed, medications can be a difficult task. Their emotional attachments to the medications can be intense and varied. For some, the prescription is a sign of being loved and cared for. For others, it represents a certification of a deficit, appeases guilt about success, and/or attests to the need for special consideration. Insofar as the medication has been helpful, it may have come to be regarded as a dearly loved friend, or even a part of the self.
When medication has been helpful, there is also, of course, concern about the potential return of the difficulties for which it was prescribed. Few patients are told at the time of first prescription that there is potential risk of habituation and return of, or potential exaggeration of, symptoms with discontinuation. This type of discussion is more difficult to have in situations in which a prescription is urgently needed and the patient is reluctant, but is still not often done in those instances in which a prescription is more optional than essential. The picture is seldom simple.
These few comments only scratch the surface of the difficulties doctors and patients face in helping patients to discontinue their medications. Residency programs pay a lot of attention to helping trainees learn to prescribe medications; rarely do they sufficiently educate residents how to help patients discontinue them. The fact that so many residencies currently pay limited attention to interventions apart from medication contributes further to the difficulty.
Medications have saved the life of many a psychiatric patient. Some patients need medication for life. But some end up on medication for life, even in some instances when the medication may not have been needed in the first place. Although it is often a difficult task, we need to do a better job of distinguishing which patients are which.
Dr. Blum is a psychiatrist and psychoanalyst in private practice in Philadelphia. He teaches in the departments of anthropology and psychiatry at the University of Pennsylvania and at the Psychoanalytic Center of Philadelphia.
Some areas of psychiatry would benefit from more controversy. One of them is the prescription of antidepressants to young people dealing with romantic disappointments.
I have seen many young men and women given an antidepressant for the very painful, but ordinary, romantic break-ups characteristic of this phase of life, who then become habituated to the drug. They take the medication indefinitely, their brains accommodate neurophysiologically to the presence of the chemical, and they become unable to discontinue it without intolerable withdrawal symptoms that look like an underlying illness. A parallel phenomenon occurs not infrequently with the use of amphetamines (and other stimulants) for attention-deficit hyperactivity disorder that is at times mistakenly diagnosed in this age group.
Antidepressants for early romantic disappointments
Mr. A, now in his 30s, became sullen and withdrawn at age 16 after a girl refused his romantic approaches. His well-intentioned parents took him to a psychiatrist, who, after a brief evaluation, prescribed fluoxetine. Mr. A is now well adjusted and happily married but unable to get off fluoxetine. Even when it is carefully tapered, 2 or 3 months after it is discontinued, he becomes anxious and depressed. This is an iatrogenic problem. It is not related to goings-on in his mind or his life; rather it is the result of his brain’s accommodation to a medication, producing a serious withdrawal syndrome.
His original psychiatrist made only a descriptive diagnosis. He did not inquire about what was going on in Mr. A’s mind and thus could not make a dynamic diagnosis (that is, a diagnosis of a patient’s central emotional conflicts, ability to function in relation to other people, strengths, and weaknesses). Mr. A, like many adolescents, had a lot of anxiety and guilt about sexual and romantic involvement, and potential success. He defended against his anxiety and guilt by assuring himself life would never work out for him. When the girl he admired rebuffed him, he immediately concluded this would perpetually be his fate, so the girl’s refusal was particularly painful. Mr. A feels that had this dynamic been discussed with him at the time, he may well not have needed medication at all.
Ms. B, like Mr. A, was prescribed antidepressants for depressive reactions to early romantic disappointments. Likewise, she self-punitively convinced herself, despite easily attracting men’s attentions, that these disappointments meant a lifetime alone. Ms. B has a family history of depression (although neither of her brothers struggles with it), and she felt that she needed the medications to help negotiate difficult periods. But should she have been on them for extended periods of time? Therapeutic attention to her emotional conflicts helped her to form lasting relationships, marry, and have children. Unable to get off the medications, she had to deal with the risks of their use during pregnancy, which she then subjected to the same sort of guilty self-accusations as she previously had used to limit her romantic prospects.
Ms. C came to me on three medications – one for each of her significant romantic break-ups. She, too, was depressively self-diminishing, beginning therapy by letting me know all the things she could think of that might make me think less of her. Understanding some of the reasons for her self-deprecation helped her toward better romantic relationships but did not give her the courage to get off her medications. Pregnancy, however, led her to promptly and successfully discontinue an antidepressant and a mood stabilizer (she has never had any symptoms suggestive of manic depression). She remained on a low dose of a selective serotonin reuptake inhibitor, had an uneventful pregnancy, and then fell in love with a charming baby.
Principles for consideration
• Psychiatrists (and other mental health professionals and primary care physicians treating mental illness) should always make a dynamic, and not merely a descriptive, diagnosis. Even with a more clearly biologically driven problem, such as bipolar disorder, the patient’s personality and conflicts matter.
• Psychiatrists should be very judicious about prescribing medications in adolescence and young adulthood, especially for difficulties adapting to the typical events of those phases of life. Expert psychotherapy should be the first choice in these instances.
• Medication, when necessary, should be prescribed for as limited a time as possible. It is important for young people to advance their own development, not feel needlessly beholden to medications, not get iatrogenically dependent on them, and not feel that they have “diseases” they don’t have.
Amphetamines for misdiagnosed ADHD
When Ms. D’s family moved to a new house, she, her brother, and her sister, each attended a new school. Unlike her siblings, Ms. D, who was in high school, had a difficult adjustment. Her grades fell. She was taken to a psychiatrist who diagnosed ADHD and prescribed amphetamines. The psychiatrist paid little attention to her prior lack of difficulty in school or her struggles making new friends. Nor did the psychiatrist learn that Ms. D had to ward off the seductive advances of an older teacher (although Ms. D would likely not have been immediately forthcoming about this at the time).
When Ms. D came to me as a college student, for troubles with anger, anxiety, and some depression, she was religiously taking 70 mg of amphetamines daily. After I learned a bit about her and raised the question of whether she actually had ADHD, and whether it might make sense to consider tapering the amphetamines, she was appalled and looked like a toddler who was afraid I was about to steal her candy. Helping her to get off the unneeded medication was a multiyear process.
First, she had to recognize that it was prescribed to treat a problem she probably didn’t have, and second, that it was failing to help her with the problems she did have. As we attended to some of her actual emotional conflicts, she became willing to experiment with lower doses. She was able to see that her work was little changed as the dose was lowered, and that her difficulties with school had more to do with feelings toward classmates and teachers than with the presence or absence of amphetamines. After a protracted struggle, finally off the medication, she felt in charge of her life and no longer believed there was something inherently wrong with her mind or her brain.
Mr. E was the only son in a high-powered academic family. His older sisters were all intellectual standouts. Early in high school, he received his first B as a grade in a course. He was taken to a pediatrician, diagnosed with ADHD, and put on stimulants. Like Ms. D, he came to believe that he needed them. In college, he began to develop some magical aspects to his thinking, a potential side effect of the stimulants. It was very difficult to help him see either that he had a problem with his thinking or that it might be attributable to the medication.
Principles to consider
Some areas of psychiatry would benefit from more controversy. One of them is the prescription of antidepressants to young people dealing with romantic disappointments.
I have seen many young men and women given an antidepressant for the very painful, but ordinary, romantic break-ups characteristic of this phase of life, who then become habituated to the drug. They take the medication indefinitely, their brains accommodate neurophysiologically to the presence of the chemical, and they become unable to discontinue it without intolerable withdrawal symptoms that look like an underlying illness. A parallel phenomenon occurs not infrequently with the use of amphetamines (and other stimulants) for attention-deficit/hyperactivity disorder (ADHD) that is at times mistakenly diagnosed in this age group.
Antidepressants for early romantic disappointments
Mr. A, now in his 30s, became sullen and withdrawn at age 16 after a girl refused his romantic approaches. His well-intentioned parents took him to a psychiatrist, who, after a brief evaluation, prescribed fluoxetine. Mr. A is now well adjusted and happily married but unable to get off fluoxetine. Even when it is carefully tapered, 2 or 3 months after it is discontinued, he becomes anxious and depressed. This is an iatrogenic problem. It is not related to goings-on in his mind or his life; rather it is the result of his brain’s accommodation to a medication, producing a serious withdrawal syndrome.
His original psychiatrist made only a descriptive diagnosis. He did not inquire about what was going on in Mr. A’s mind and thus could not make a dynamic diagnosis (that is, a diagnosis of a patient’s central emotional conflicts, ability to function in relation to other people, strengths, and weaknesses). Mr. A, like many adolescents, had a lot of anxiety and guilt about sexual and romantic involvement, and potential success. He defended against his anxiety and guilt by assuring himself life would never work out for him. When the girl he admired rebuffed him, he immediately concluded this would perpetually be his fate, so the girl’s refusal was particularly painful. Mr. A feels that had this dynamic been discussed with him at the time, he may well not have needed medication at all.
Ms. B, like Mr. A, was prescribed antidepressants for depressive reactions to early romantic disappointments. Likewise, she self-punitively convinced herself, despite easily attracting men’s attentions, that these disappointments meant a lifetime alone. Ms. B has a family history of depression (although neither of her brothers struggles with it), and she felt that she needed the medications to help negotiate difficult periods. But should she have been on them for extended periods of time? Therapeutic attention to her emotional conflicts helped her to form lasting relationships, marry, and have children. Unable to get off the medications, she had to deal with the risks of their use during pregnancy, which she then subjected to the same sort of guilty self-accusations as she previously had used to limit her romantic prospects.
Ms. C came to me on three medications – one for each of her significant romantic break-ups. She, too, was depressively self-diminishing, beginning therapy by letting me know all the things she could think of that might make me think less of her. Understanding some of the reasons for her self-deprecation helped her toward better romantic relationships but did not give her the courage to get off her medications. Pregnancy, however, led her to promptly and successfully discontinue an antidepressant and a mood stabilizer (she has never had any symptoms suggestive of manic depression). She remained on a low dose of a selective serotonin reuptake inhibitor, had an uneventful pregnancy, and then fell in love with a charming baby.
Principles for consideration
• Psychiatrists (and other mental health professionals and primary care physicians treating mental illness) should always make a dynamic, and not merely a descriptive, diagnosis. Even with a more clearly biologically driven problem, such as bipolar disorder, the patient’s personality and conflicts matter.
• Psychiatrists should be very judicious about prescribing medications in adolescence and young adulthood, especially for difficulties adapting to the typical events of those phases of life. Expert psychotherapy should be the first choice in these instances.
• Medication, when necessary, should be prescribed for as limited a time as possible. It is important for young people to advance their own development, not feel needlessly beholden to medications, not get iatrogenically dependent on them, and not feel that they have “diseases” they don’t have.
Amphetamines for misdiagnosed ADHD
When Ms. D’s family moved to a new house, she, her brother, and her sister each attended a new school. Unlike her siblings, Ms. D, who was in high school, had a difficult adjustment. Her grades fell. She was taken to a psychiatrist who diagnosed ADHD and prescribed amphetamines. The psychiatrist paid little attention to her prior lack of difficulty in school or her struggles making new friends. Nor did the psychiatrist learn that Ms. D had to ward off the seductive advances of an older teacher (although Ms. D would likely not have been immediately forthcoming about this at the time).
When Ms. D came to me as a college student, for troubles with anger, anxiety, and some depression, she was religiously taking 70 mg of amphetamines daily. After I learned a bit about her and raised the question of whether she actually had ADHD, and whether it might make sense to consider tapering the amphetamines, she was appalled and looked like a toddler who was afraid I was about to steal her candy. Helping her to get off the unneeded medication was a multiyear process.
First, she had to recognize that it was prescribed to treat a problem she probably didn’t have, and second, that it was failing to help her with the problems she did have. As we attended to some of her actual emotional conflicts, she became willing to experiment with lower doses. She was able to see that her work was little changed as the dose was lowered, and that her difficulties with school had more to do with feelings toward classmates and teachers than with the presence or absence of amphetamines. After a protracted struggle, finally off the medication, she felt in charge of her life and no longer believed there was something inherently wrong with her mind or her brain.
Mr. E was the only son in a high-powered academic family. His older sisters were all intellectual standouts. Early in high school, he received his first B as a grade in a course. He was taken to a pediatrician, diagnosed with ADHD, and put on stimulants. Like Ms. D, he came to believe that he needed them. In college, he began to develop some magical aspects to his thinking, a potential side effect of the stimulants. It was very difficult to help him see either that he had a problem with his thinking or that it might be attributable to the medication.
Principles to consider
• If the ADHD wasn’t there in elementary school or before, it is unlikely that an adolescent or young adult has new-onset ADHD. A new or newly amplified conflict is occurring in the person’s mind and life. A dynamic diagnosis, as always, is essential.
• When medication is prescribed for actual ADHD, as with anything else, the question of how long it will be taken must be asked. For life? Until other means of adaptation are accomplished? Until adequate outcome studies of long-term use of the medication are performed?
Helping patients to get off unneeded, or no longer needed, medications can be a difficult task. Their emotional attachments to the medications can be intense and varied. For some, the prescription is a sign of being loved and cared for. For others, it represents a certification of a deficit, appeases guilt about success, and/or attests to the need for special consideration. Insofar as the medication has been helpful, it may have come to be regarded as a dearly loved friend, or even a part of the self.
When medication has been helpful, there is also, of course, concern about the potential return of the difficulties for which it was prescribed. Few patients are told at the time of first prescription that there is potential risk of habituation and return of, or potential exaggeration of, symptoms with discontinuation. This type of discussion is more difficult to have in situations in which a prescription is urgently needed and the patient is reluctant, but is still not often done in those instances in which a prescription is more optional than essential. The picture is seldom simple.
These few comments only scratch the surface of the difficulties doctors and patients face in helping patients to discontinue their medications. Residency programs pay a lot of attention to helping trainees learn to prescribe medications; rarely do they sufficiently educate residents in how to help patients discontinue them. The fact that so many residencies currently pay limited attention to interventions apart from medication contributes further to the difficulty.
Medications have saved the life of many a psychiatric patient. Some patients need medication for life. But some end up on medication for life, even in some instances when the medication may not have been needed in the first place. Although it is often a difficult task, we need to do a better job of distinguishing which patients are which.
Dr. Blum is a psychiatrist and psychoanalyst in private practice in Philadelphia. He teaches in the departments of anthropology and psychiatry at the University of Pennsylvania and at the Psychoanalytic Center of Philadelphia.
Aortic repair in Loeys-Dietz syndrome requires close follow-up
The knowledge about Loeys-Dietz syndrome has evolved quickly since Hal Dietz, MD, and Bart Loeys, MD, at Johns Hopkins University, Baltimore, first reported on it in 2005. Now, another team of Johns Hopkins investigators has reported that an aggressive approach with aortic root replacement coupled with valve-sparing whenever possible produces favorable results, but that clinicians must follow these patients closely with cardiovascular imaging.
“Growing experience with Loeys-Dietz syndrome has confirmed early impressions of its aggressive nature and proclivity toward aortic catastrophe,” Nishant D. Patel, MD, and his coauthors said in the February issue of the Journal of Thoracic and Cardiovascular Surgery (2017;153:406-12). They reported on results of all 79 patients with Loeys-Dietz syndrome (LDS) who had cardiovascular surgery at Johns Hopkins. There were two (3%) deaths during surgery and eight (10%) late deaths.
Patients with LDS are at risk for early dissection, when the aortic root reaches only 4 cm. Despite what they termed “favorable” outcomes of surgery, Dr. Patel and his coauthors acknowledged that reintervention rates for this population are high – 19 patients (24%) had subsequent operations. That suggests cardiac surgeons must closely monitor these patients. “Meticulous follow-up with cardiovascular surveillance imaging remains important for management, particularly as clinical LDS subtypes are characterized and more tailored treatment is developed,” Dr. Patel and his coauthors reported.
They advise echocardiography every 3 to 6 months for the first year after surgery and then every 6 to 12 months afterward. Full-body imaging should occur at least every 2 years.
“In particular, patients with type B dissections should be monitored aggressively for aneurysm growth,” Dr. Patel and his coauthors said. They recommend imaging at 7 to 14 days after dissection, then repeat imaging at 1, 3, 6, and 12 months, and then yearly thereafter.
They noted that four LDS subtypes have been identified. Although those with LDS1 and 2 subtypes are prone to aortic rupture at an earlier age and at smaller aortic diameters than patients with other connective tissue disorders, the medical and surgical management for all subtypes is similar, Dr. Patel and his coauthors indicated.
“Certain congenital heart defects are more common among patients with LDS, compared with the normal population, including patent ductus arteriosus and mitral valve prolapse/insufficiency,” they said. Genotype is one factor that determines the need for surgery in LDS patients, Dr. Patel and his coauthors said. Others are growth rate, aortic valve function, family history, and severity of noncardiac phenotype.
The 79 patients in the study were divided almost evenly by sex, and the average age at first operation was 24.9 years; 38 were children younger than 18 years, and 20 had a previous sternotomy.
Aortic root replacement represented the predominant operation in the group, accounting for 65 operations (82.3%), of which 52 (80%) were valve-sparing procedures and the remainder were composite valve-graft procedures. The other procedures the researchers performed were nine aortic arch replacements (11.4%), three open thoracoabdominal repairs (3.8%) and two ascending aorta replacements (2.5%).
“Valve-sparing root replacement has become a safe and reliable option for appropriately selected younger patients with LDS,” Dr. Patel and his coauthors wrote. Five patients needed a second operation on the aortic valve or root; three of them had a Florida sleeve procedure. “Based on these initial outcomes with the Florida sleeve at our institution, we have abandoned this procedure in favor of conventional valve-sparing root replacement,” Dr. Patel and his coauthors stated.
Dr. Patel and his coauthors had no financial relationships to disclose.
This report by Dr. Patel and his coauthors confirms the need for close surveillance of individuals with Loeys-Dietz syndrome who have had aortic operations, John S. Ikonomidis, MD, PhD, of the Medical University of South Carolina, Charleston, said in his invited commentary (J Thorac Cardiovasc Surg. 2017;153:413-4).
Dr. Ikonomidis noted this study is important because of its population size. “This is probably the largest single-center surgical report of this kind in the world,” he said.
The study highlighted a number of issues germane to LDS patients who have cardiovascular surgery, among them a critical need for genetic testing to help cardiac surgeons determine the disease genotype and what operation to perform, Dr. Ikonomidis said.
But Dr. Ikonomidis also pointed out the variation in aortic root size in the study patients. The smallest root in the series was 2 cm and 21 of 65 patients with a maximum root diameter smaller than 4 cm had root surgery. “This is a testament to the fact that surgical decision making in this population is dependent not just on the known genotype and aortic dimensions, but also on the rate of growth, aortic valve function, severity of noncardiac phenotype, and family history,” Dr. Ikonomidis said.
Dr. Ikonomidis had no financial relationships to disclose.
Key clinical point: Outcomes for aortic surgery in Loeys-Dietz syndrome are favorable, but reintervention rates are high.
Major finding: Operative mortality was 3%, but 24% of patients required reintervention, underscoring the need for close postoperative follow-up with cardiovascular imaging.
Data source: Retrospective review of 79 patients who had cardiovascular surgery for LDS over 26 years at Johns Hopkins University.
Disclosure: Dr. Patel and his coauthors reported having no relevant financial disclosures.
Oral contraceptive use confers long-term cancer protection
New findings from a cohort study with more than 4 decades of follow-up show that women who have ever used combined oral contraceptives see an increased risk of breast and cervical cancer, but that risk disappears within about 5 years of stopping, whereas a protective effect against colorectal, endometrial, and ovarian cancer persists for more than 30 years.
The findings provide an update to the General Practitioners’ Oral Contraception Study of a United Kingdom cohort recruited in the late 1960s.
The women’s mean age was 70.2 years, most were white, and the mean follow-up was 40.7 years. Women who had used the pill did so for a mean of 3.66 years and used older, higher-estrogen formulations.
Compared with never users, users of oral contraception had a nonsignificant 4% reduced risk of any cancer. The incidence rate ratio for breast cancer was similar between ever users and never users (IRR, 1.04; 99% CI, 0.91-1.17). Women who had used OCs saw significant reductions in colorectal (IRR, 0.81; 99% CI, 0.66-0.99), endometrial (IRR, 0.66; 99% CI, 0.48-0.89), ovarian (IRR, 0.67; 99% CI, 0.50-0.89), and lymphatic and hematopoietic cancers (IRR, 0.74; 99% CI, 0.58-0.94), compared with never users.
Lung cancer incidence was increased among ever users of OCs, but only in women who smoked at the time of recruitment.
“There was no evidence of new cancer risks appearing later in life among women who had used oral contraceptives,” the researchers wrote. “Thus, the overall balance of cancer risk among past users of oral contraceptives was neutral with the increased risks counterbalanced by the endometrial, ovarian, and colorectal cancer benefits that persist at least 30 years.”
The results, the researchers wrote, “provide strong evidence that most women do not expose themselves to long-term cancer harm if they choose to use oral contraception, indeed many are likely to be protected.”
The study was funded by the Royal College of General Practitioners, Medical Research Council, Imperial Cancer Research Fund, British Heart Foundation, Schering AG, Schering Health Care, Wyeth Ayerst International, Ortho Cilag, and Searle. The researchers reported having no conflicts of interest.
FROM THE AMERICAN JOURNAL OF OBSTETRICS AND GYNECOLOGY
Key clinical point:
Major finding: At about 40 years of follow-up, women who had ever used combined OCs saw reduced incidence of colorectal (IRR, 0.81), endometrial (IRR, 0.66), ovarian (IRR, 0.67), and lymphatic and hematopoietic cancer (IRR, 0.74), compared with never users.
Data source: A prospective cohort study originally enrolling 46,000 women who were followed for up to 44 years.
Disclosures: The study was funded by the Royal College of General Practitioners, Medical Research Council, Imperial Cancer Research Fund, British Heart Foundation, Schering AG, Schering Health Care, Wyeth Ayerst International, Ortho Cilag, and Searle. The researchers reported having no conflicts of interest.
Intraoperative PTH spikes may mean multigland disease
SAN DIEGO – Intraoperative spikes of parathyroid hormone don’t predict a failed parathyroidectomy, according to a retrospective study of patients who had the surgery for hyperparathyroidism.
They should, however, raise the suspicion of multigland disease, Richard Teo said at the Association for Academic Surgery/Society of University Surgeons Academic Surgical Congress.
“Significantly more patients with intraoperative spikes didn’t achieve this drop, and they had a higher rate of multigland disease requiring bilateral neck exploration,” he said. “But although spikes did increase the suspicion of multigland disease, they did not affect the operative success rate in this study.”
He presented a retrospective analysis of 683 patients who underwent parathyroidectomy for hyperparathyroidism. These patients were largely female (76%). Those who had the intraoperative spikes were older (60 vs. 58 years) and had higher preoperative calcium than patients without spikes. There were no differences in parathyroid hormone (PTH) or creatinine levels.
Operative success – described as normocalcemia at least 6 months after surgery – occurred in 98% of the entire group. The operative failure rate was 0.9%, and the recurrence rate was 1%. About 5% of the entire group had multigland disease.
Intraoperative PTH spikes occurred in 224 patients (33%). Compared with those without spikes, patients with spikes were significantly less likely to achieve the PTH decrease of 50% or greater at 10 minutes after gland excision (70% vs. 90%).
Bilateral neck explorations were significantly more common among those with spikes (10% vs. 5%), as was multigland disease (8% vs. 3%). There was no significant difference in operative time (54 vs. 59 minutes).
Postoperative outcomes were similar. At last follow-up, calcium levels were identical (9.3 mg/dL) in the group with and the group without a spike in PTH. In addition, the PTH levels were not significantly different (47 vs. 57 pg/mL).
Operative success was achieved in 98% of both groups, with a 2% failure rate in both groups. Recurrence was slightly, though not significantly, less in the spike group (0.4% vs. 1.3%).
“We were able to show that intraoperative PTH spikes don’t predict a poor outcome of parathyroidectomy,” Mr. Teo said. “We also feel this study reaffirms the clinical utility of the 50% or greater intraoperative PTH drop as a predictor of the successful removal of all hypersecreting parathyroid tissue during parathyroidectomy guided by intraoperative PTH monitoring.”
He had no financial disclosures.
[email protected]
On Twitter @alz_gal
Key clinical point:
Major finding: Intraoperative PTH spikes occurred in 33% of parathyroidectomy patients, and 8% of patients with spikes had multigland disease.
Data source: The retrospective study comprised 683 patients.
Disclosures: Mr. Teo had no financial disclosures.
HHS Funds More Health Centers
The HHS has announced more than $50 million in funding for 75 health centers in 23 states, Puerto Rico, and the Federated States of Micronesia.
One in 13 people nationwide depends on a Health Resources and Services Administration (HRSA)-funded health center for preventive and primary health care needs. Among the special populations served are nearly 2 million homeless patients, 910,172 agricultural workers, and 305,520 veterans.
Health centers are community based and patient directed, delivering comprehensive, culturally competent primary care. They also often link to pharmacy, mental health, substance abuse, and oral health services in areas where economic, geographic, or cultural barriers limit access to affordable health care services.
Although the health centers serve patients who are often sicker and more at risk than is the general population, the quality of care “equals and often surpasses” that provided by other primary care providers, HRSA says. For example, > 93% of HRSA-funded health centers met or exceeded at least 1 goal of Healthy People 2020 for clinical performance in 2015. And > 68% of health centers are recognized by national accrediting organizations as Patient-Centered Medical Homes, an advanced model of team-based primary care.
The health centers, which started 50 years ago with just 2, have expanded to > 9,800 clinic sites. Between 2008 and 2015, the number of HRSA-supported centers increased by 27%, and the number of patients increased by 42%, adding more than 7 million patients. In 2015 alone, HRSA funded nearly 430 new center sites. Health centers already provide care to more than 24 million people; this new funding will extend care to about 240,000 additional patients.
50 years of child psychiatry, developmental-behavioral pediatrics
The 50th anniversary of Pediatric News prompts us to look back on the past 50 years in child psychiatry and developmental-behavioral pediatrics, and reflect on the evolution of the field. This includes the approach to diagnosis, the thinking about development and family, and the approach and access to treatment during this dynamic period.
While some historians identify the establishment of the first juvenile court in Chicago in 1899 and the work to help judges evaluate juvenile delinquency as the origin of child psychiatry in the United States, it was not until after World War II that the field really began to take root here, largely based on psychiatrists fleeing Europe and the seminal work of Anna Freud. Some of the earliest connections between pediatrics and child psychiatry were based on the work in England of Donald W. Winnicott, a practicing pediatrician and child psychiatrist, Albert J. Solnit, MD, at the Yale Child Study Center, and psychologically informed work of pediatrician Benjamin M. Spock, MD.
The first Diagnostic and Statistical Manual (DSM) was published in 1952, based on a codification of mental disorders established by the Navy during WWII. The American Academy of Child & Adolescent Psychiatry was established in 1953, the same year that the first “tranquilizer,” chlorpromazine (Thorazine) was introduced (in France), marking the start of a revolution in psychiatric care. In 1959, the first candidates sat for a licensing examination in child psychiatry. The Section on Developmental and Behavioral Pediatrics was established as part of the American Academy of Pediatrics in 1960 to support training in this area. The AACAP established a journal in 1961. Child guidance clinics started affiliating with hospitals and universities in the 1960’s, after the Community Mental Health Act of 1963. Then, in 1965, Julius B. Richmond, MD, (a pediatrician) and Uri Bronfenbrenner, PhD, (a developmental psychologist), recognizing the importance of ecological systems to child development, were involved in the creation of Head Start, and the first Joint Commission on Mental Health for Children was established by federal legislation in 1965. The field was truly coalescing into a distinct discipline of medicine, one that bridged pediatrics, psychiatry, and neurology with nonmedical disciplines such as justice and education.
The decade between 1967 and 1977 was a period of transition from the focus on psychoanalytic concepts typical of the first half of the century to a more systematic approach to diagnosis. Children in psychiatric treatment had commonly been seen for extended individual treatments, and those with more disruptive disorders often were hospitalized for long periods. Psychoanalysis focused on the unconscious (theoretical drives and conflicts) to guide treatment. Treatment often focused on the role (causal) of parents, and family treatment was common, even on inpatient units. The second edition of the DSM (DSM-II) was published in 1968, with its first distinct section for disorders of childhood and adolescence, and an overarching focus on psychodynamics. In 1974, the decision was made to publish a new edition of the DSM that would establish a multiaxial assessment system (separating “biological” mental health problems from personality disorders, medical illnesses, and psychosocial stressors) and research-oriented diagnostic criteria that would attempt to facilitate reliable diagnoses based on common clusters of symptoms. Field trials sponsored by the National Institute of Mental Health began in 1977 to establish the reliability of the new diagnoses.
The year 1977 saw the first Apple computer, the New York City blackout, the release of the first “Star Wars” movie, and also the start of a momentous decade in general and child psychiatry. The third edition of the DSM (DSM-III) was published in 1980, the beginning of a revolution in psychiatric diagnosis and treatments. It created reliable, reproducible diagnostic constructs to serve as the basis for studies on epidemiology and treatment. Implications of causality were replaced by description; for example, hyperkinetic reaction of childhood was redefined and labeled attention-deficit disorder. Recognizing the importance of research and training in this rapidly changing field, the W.T. Grant Foundation funded 11 fellowship programs in 1977, and the Society for Developmental and Behavioral Pediatrics was founded in 1982 by the leaders of those programs.
In 1983, the AACAP published “Child Psychiatry: A Plan for the Coming Decades.” It was the result of 5 years’ work by 100 child psychiatrists, general psychiatrists, pediatricians, epidemiologists, nurses, leaders of the NIMH, and various child advocates. This report laid out a challenge for child psychiatry to develop research strategies that would allow evidence-based understanding and treatment of the mental illnesses of children. The established focus on individual experience and anecdotal data, particularly about social and psychodynamic influences, would shift towards a more scientific approach to diagnosis and treatment. This decade started an explosion in epidemiologic research, medication trials, and controlled studies of nonbiological treatments in child psychiatry. At the same time, the political landscape changed, and an ascendant conservatism began the process of closing publicly funded residential treatment centers that had offered care to the more chronically mentally ill and children with profound developmental disorders. This would accelerate the shift towards outpatient psychiatric care of children. Ironically, as research would accelerate in child psychiatry, access to effective treatments would become more difficult.
The decade from 1987 to 1997 was a period of dramatic growth in medication use in child psychiatry. Prozac was approved by the Food and Drug Administration for use in the United States in 1988 and soon followed by other selective serotonin reuptake inhibitors (Zoloft in 1991 and Paxil in 1992). The journal of the AACAP began to publish more randomized controlled trials of medication treatments in children with DSM-codified diagnoses, and clinicians became more comfortable using stimulants, antidepressants, and even antipsychotic medications in the outpatient setting. This trend was enhanced by the emergence of managed care and the denial of coverage for alleged “nonbiological” diagnoses and for many psychiatric treatments. Loss of reimbursement led to a significant decline in resources, particularly inpatient child psychiatry beds and specialized clinics. This, in turn, contributed to the growing emphasis on medication treatments for children’s mental health problems. For-profit managed care companies underbid each other to provide mental health coverage and incentivized medication visits. Of note, the medical budgets, not the mental health carve outs, were billed for the medication prescribed.
The Americans with Disabilities Act was passed in 1990, increasing the funding for school-based mental health resources for children, and in 1996, Congress passed the Mental Health Parity Act, the first of several legislative attempts to ensure parity between insurance coverage for medical and psychiatric illnesses – legislation that to this day has not achieved parity of access to care. As pediatricians took on more of mental health care, a multidisciplinary team created a primary care version of the DSM-IV, the DSM-IV-PC, in 1995, to help define symptom levels below the threshold for disorder and facilitate earlier intervention. A formal subspecialty of developmental-behavioral pediatrics was established in 1999 to educate leaders in the field. Pediatric residents have had required training in developmental-behavioral pediatrics since 2008.
The year 1997 saw the first nationwide survey of parents about attention-deficit/hyperactivity disorder, kicking off what could be called the decade of ADHD, in which prevalence rates steadily climbed, from 5.7% in 1997 to 9.5% in 2007. The prevalence of stimulant treatment in children skyrocketed in this period. According to the NIMH, stimulants were prescribed to 4.2% of 6- to 12-year-olds in 1996, and that number grew to 5.1% in 2008. For 13- to 18-year-olds, the rate more than doubled during this time, from 2.3% in 1996 to 4.9% in 2008. The prevalence of autism also grew dramatically during this time, from 1.9 per 1,000 in 1997-1999 to 7.4 per 1,000 in 2006-2008, probably based on an evolving understanding of the disorder and this diagnosis providing special access to resources in schools.
Research during this decade became increasingly focused on imaging studies of children (and adults), as leaders in the field were trying to move from symptom clusters to anatomic and physiologic correlates of psychiatric illness. The great increase in medication use in children hit a speed bump in October 2004, when the Food and Drug Administration issued a controversial public warning about an increased risk of suicidal thoughts or behaviors in youth being treated with SSRI antidepressants. As access to child psychiatric treatment had become more difficult over the preceding decades, pediatricians had assumed much of the medication treatment of common psychiatric problems. The FDA’s black box warning complicated pediatricians’ efforts to fill this void.
The last decade has been the decade of genetics and efforts to improve access to care. It started in 2007 with the FDA expanding its SSRI warning to acknowledge that depression itself increased the risk for suicide, in an effort to not discourage needed depression treatment in young people. But studies demonstrated that the rates of diagnosing and treating depression dropped dramatically in the years following the warning: Diagnoses of depression declined by as much as 42% in children, and the rate of antidepressant treatment in adolescents dropped by as much as 32% in the 2 years following the warning (N Engl J Med. 2014 Oct 30;371(18):1666-8). There was no compensatory increase in utilization of other kinds of treatments. While suicide rates in young people had been stubbornly steady from the mid-1970’s to the mid-1990’s, they began to decline in 1996, according to the Centers for Disease Control and Prevention. But that trend was broken in 2004, with a jump in attempted and completed suicides in young people. The rate stabilized later in the decade, but has never returned to the lows that were being achieved prior to the warning.
This decade was marked by the passage of the Affordable Care Act, including – again – an unfulfilled mandate for mental health parity for any insurance plans in the marketplace. Although diagnosis is still symptom based, the effort to define psychiatric disorders based on brain anatomy, neurotransmitters, and genomics continues to intensify. There is growing evidence that psychiatric disorders are not nature or nurture, but nature and nurture. Epigenetic findings show that environment impacts gene expression and brain functioning. These findings promise to deepen our understanding of the critical role of early experiences (consider Adverse Childhood Experiences [ACE] scores) and the promise of protective relationships, in schools and parenting.
And what will come next? We believe that silos – medical, psychiatric, parenting, school, environment – will be bridged to understand the many factors that impact behavior and treatment, but the need to advocate for policies that support funding for the education and mental health care of children and the training of professionals to provide that care is never ending. As our knowledge of the genome marches forward, we may discover effective strategies for preventing the emergence of mental illness in children or create individualized treatments. We may learn more about the role of nutrition and the microbiome in health and disease, about autoimmunity and mental illness. Our focus may return to parents, not as culprits, but as the mediators of health from the prenatal period on. Technology may enable us to improve access to effective treatments, with teens monitoring their sleep and mood, and accessing therapy on their smart phones. And our understanding of development and vulnerability may help us stem the rise in autism or collaborate with educators so that education could better put every child on their healthiest possible path. We look forward to experiencing it – and writing about it – with you!
Dr. Swick is an attending psychiatrist in the division of child psychiatry at Massachusetts General Hospital, Boston, and director of the Parenting at a Challenging Time (PACT) Program at the Vernon Cancer Center at Newton Wellesley Hospital, also in Boston. Dr. Jellinek is professor emeritus of psychiatry and pediatrics, Harvard Medical School, Boston. They said they had no relevant financial disclosures. Dr. Howard is assistant professor of pediatrics at Johns Hopkins University, Baltimore, and creator of CHADIS (www.CHADIS.com). She had no other relevant disclosures. Dr. Howard’s contribution to this publication was as a paid expert to Frontline Medical News. Email them at [email protected].
The Americans with Disabilities Act was passed in 1990, increasing the funding for school-based mental health resources for children, and in 1996, Congress passed the Mental Health Parity Act, the first of several legislative attempts to ensure parity between insurance coverage for medical and psychiatric illnesses – legislation that to this day has not achieved parity of access to care. As pediatricians took on more of mental health care, a multidisciplinary team created a primary care version of DSM IV, the DSM-IV-PC, in 1995, to assist with defining levels of symptoms less than disorder to facilitate earlier intervention. A formal subspecialty of developmental-behavioral pediatrics was established in 1999 to educate leaders. Pediatric residents have had required training in developmental-behavioral pediatrics since 2008.
The year 1997 saw the first nationwide survey of parents about attention-deficit/hyperactivity disorder, kicking off what could be called the decade of ADHD, in which prevalence rates steadily climbed, from 5.7% in 1997 to 9.5% in 2007. The prevalence of stimulant treatment in children skyrocketed in this period. According to the NIMH, stimulants were prescribed to 4.2% of 6- to 12-year-olds in 1996, and that number grew to 5.1% in 2008. For 13- to 18-year-olds, the rate more than doubled during this time, from 2.3% in 1996 to 4.9% in 2008. The prevalence of autism also grew dramatically during this time, from 1.9 per 1,000 in 1997-1999 to 7.4 per 1,000 in 2006-2008, probably based on an evolving understanding of the disorder and this diagnosis providing special access to resources in schools.
Research during this decade became increasingly focused on imaging studies of children (and adults), as leaders in the field were trying to move from symptom clusters to anatomic and physiologic correlates of psychiatric illness. The great increase in medication use in children hit a speed bump in October 2004, when the Food and Drug Administration issued a controversial public warning about an increased risk of suicidal thoughts or behaviors in youth being treated with SSRI antidepressants. As access to child psychiatric treatment had become more difficult over the preceding decades, pediatricians had assumed much of the medication treatment of common psychiatric problems. The FDA’s black box warning complicated pediatricians’ efforts to fill this void.
The last decade has been the decade of genetics and efforts to improve access to care. It started in 2007 with the FDA expanding its SSRI warning to acknowledge that depression itself increased the risk for suicide, in an effort to not discourage needed depression treatment in young people. But studies demonstrated that the rates of diagnosing and treating depression dropped dramatically in the years following the warning: Diagnoses of depression declined by as much as 42% in children, and the rate of antidepressant treatment in adolescents dropped by as much as 32% in the 2 years following the warning (N Engl J Med. 2014 Oct 30;371(18):1666-8). There was no compensatory increase in utilization of other kinds of treatments. While suicide rates in young people had been stubbornly steady from the mid-1970’s to the mid-1990’s, they began to decline in 1996, according to the Centers for Disease Control and Prevention. But that trend was broken in 2004, with a jump in attempted and completed suicides in young people. The rate stabilized later in the decade, but has never returned to the lows that were being achieved prior to the warning.
This decade was marked by the passage of the Affordable Care Act, including – again – an unfulfilled mandate for mental health parity for any insurance plans in the marketplace. Although diagnosis is still symptom based, the effort to define psychiatric disorders based on brain anatomy, neurotransmitters, and genomics continues to intensify. There is growing evidence that psychiatric disorders are not nature or nurture, but nature and nurture. Epigenetic findings show that environment impacts gene expression and brain functioning. These findings promise to deepen our understanding of the critical role of early experiences (consider Adverse Childhood Experiences [ACE] scores) and the promise of protective relationships, in schools and parenting.
And what will come next? We believe that silos – medical, psychiatric, parenting, school, environment – will be bridged to understand the many factors that impact behavior and treatment, but the need to advocate for policies that support funding for the education and mental health care of children and the training of professionals to provide that care is never ending. As our knowledge of the genome marches forward, we may discover effective strategies for preventing the emergence of mental illness in children or create individualized treatments. We may learn more about the role of nutrition and the microbiome in health and disease, about autoimmunity and mental illness. Our focus may return to parents, not as culprits, but as the mediators of health from the prenatal period on. Technology may enable us to improve access to effective treatments, with teens monitoring their sleep and mood, and accessing therapy on their smart phones. And our understanding of development and vulnerability may help us stem the rise in autism or collaborate with educators so that education could better put every child on their healthiest possible path. We look forward to experiencing it – and writing about it – with you!
Dr. Swick is an attending psychiatrist in the division of child psychiatry at Massachusetts General Hospital, Boston, and director of the Parenting at a Challenging Time (PACT) Program at the Vernon Cancer Center at Newton Wellesley Hospital, also in Boston. Dr. Jellinek is professor emeritus of psychiatry and pediatrics, Harvard Medical School, Boston. They said they had no relevant financial disclosures. Dr. Howard is assistant professor of pediatrics at Johns Hopkins University, Baltimore, and creator of CHADIS (www.CHADIS.com). She had no other relevant disclosures. Dr. Howard’s contribution to this publication was as a paid expert to Frontline Medical News. Email them at [email protected].
The 50th anniversary of Pediatric News prompts us to look back on the past 50 years in child psychiatry and developmental-behavioral pediatrics and to reflect on the evolution of the field: the approach to diagnosis, the thinking about development and family, and the approach to, and access to, treatment during this dynamic period.
While some historians identify the establishment of the first juvenile court in Chicago in 1899, and the work to help judges evaluate juvenile delinquency, as the origin of child psychiatry in the United States, the field did not truly take root here until after World War II, driven largely by psychiatrists fleeing Europe and by the seminal work of Anna Freud. Some of the earliest connections between pediatrics and child psychiatry were based on the work in England of Donald W. Winnicott, a practicing pediatrician and child psychiatrist; the work of Albert J. Solnit, MD, at the Yale Child Study Center; and the psychologically informed work of pediatrician Benjamin M. Spock, MD.
The first Diagnostic and Statistical Manual of Mental Disorders (DSM) was published in 1952, based on a codification of mental disorders established by the Navy during World War II. The American Academy of Child & Adolescent Psychiatry (AACAP) was established in 1953, the same year that the first “tranquilizer,” chlorpromazine (Thorazine), was introduced in France, marking the start of a revolution in psychiatric care. In 1959, the first candidates sat for a licensing examination in child psychiatry. The Section on Developmental and Behavioral Pediatrics was established within the American Academy of Pediatrics in 1960 to support training in this area. The AACAP established a journal in 1961. Child guidance clinics began affiliating with hospitals and universities in the 1960s, after the Community Mental Health Act of 1963. Then, in 1965, Julius B. Richmond, MD, a pediatrician, and Uri Bronfenbrenner, PhD, a developmental psychologist, recognizing the importance of ecological systems to child development, were involved in the creation of Head Start, and the first Joint Commission on Mental Health for Children was established by federal legislation the same year. The field was truly coalescing into a distinct discipline of medicine, one that bridged pediatrics, psychiatry, and neurology with nonmedical disciplines such as justice and education.
The decade between 1967 and 1977 was a period of transition from the psychoanalytic concepts that dominated the first half of the century to a more systematic approach to diagnosis. Children in psychiatric treatment commonly were seen for extended individual treatments, and those with more disruptive disorders often were hospitalized for long periods. Psychoanalysis focused on the unconscious (theoretical drives and conflicts) to guide treatment. Treatment often focused on the presumed causal role of parents, and family treatment was common, even on inpatient units. The second edition of the DSM (DSM-II) was published in 1968, with the manual’s first distinct section for disorders of childhood and adolescence and an overarching focus on psychodynamics. In 1974, the decision was made to publish a new edition of the DSM that would establish a multiaxial assessment system (separating “biological” mental health problems from personality disorders, medical illnesses, and psychosocial stressors) and research-oriented diagnostic criteria intended to facilitate reliable diagnoses based on common clusters of symptoms. Field trials sponsored by the National Institute of Mental Health (NIMH) began in 1977 to establish the reliability of the new diagnoses.
The year 1977 saw the first Apple computer, the New York City blackout, the release of the first “Star Wars” movie, and also the start of a momentous decade in general and child psychiatry. The third edition of the DSM (DSM-III) was published in 1980, the beginning of a revolution in psychiatric diagnosis and treatment. It created reliable, reproducible diagnostic constructs to serve as the basis for studies of epidemiology and treatment. Implications of causality were replaced by description; for example, hyperkinetic reaction of childhood was redefined and labeled attention-deficit disorder. Recognizing the importance of research and training in this rapidly changing field, the W.T. Grant Foundation funded 11 fellowship programs in 1977, and the Society for Developmental and Behavioral Pediatrics was founded in 1982 by the leaders of those programs.
In 1983, the AACAP published “Child Psychiatry: A Plan for the Coming Decades,” the result of 5 years’ work by 100 child psychiatrists, general psychiatrists, pediatricians, epidemiologists, nurses, leaders of the NIMH, and various child advocates. The report laid out a challenge for child psychiatry to develop research strategies that would allow evidence-based understanding and treatment of the mental illnesses of children. The established focus on individual experience and anecdotal data, particularly about social and psychodynamic influences, would shift toward a more scientific approach to diagnosis and treatment. This decade saw the start of an explosion in epidemiologic research, medication trials, and controlled studies of nonbiological treatments in child psychiatry. At the same time, the political landscape changed, and an ascendant conservatism began the process of closing publicly funded residential treatment centers that had offered care to chronically mentally ill children and to those with profound developmental disorders. This change would accelerate the shift toward outpatient psychiatric care of children. Ironically, as research accelerated in child psychiatry, access to effective treatments would become more difficult.
The decade from 1987 to 1997 was a period of dramatic growth in medication use in child psychiatry. Fluoxetine (Prozac) was approved by the Food and Drug Administration for use in the United States in 1988 and was soon followed by other selective serotonin reuptake inhibitors (SSRIs): sertraline (Zoloft) in 1991 and paroxetine (Paxil) in 1992. The journal of the AACAP began to publish more randomized, controlled trials of medication treatments in children with DSM-codified diagnoses, and clinicians became more comfortable using stimulants, antidepressants, and even antipsychotic medications in the outpatient setting. This trend was reinforced by the emergence of managed care and the denial of coverage for alleged “nonbiological” diagnoses and for many psychiatric treatments. Loss of reimbursement led to a significant decline in resources, particularly inpatient child psychiatry beds and specialized clinics, which in turn contributed to the growing emphasis on medication treatments for children’s mental health problems. For-profit managed care companies underbid each other to provide mental health coverage and incentivized medication visits. Of note, the medical budgets, not the mental health carve-outs, were billed for the medications prescribed.
The Americans with Disabilities Act was passed in 1990, increasing funding for school-based mental health resources for children, and in 1996, Congress passed the Mental Health Parity Act, the first of several legislative attempts to ensure parity between insurance coverage for medical and psychiatric illnesses – legislation that to this day has not achieved parity of access to care. As pediatricians took on more of mental health care, a multidisciplinary team created a primary care version of the DSM-IV, the DSM-IV-PC, in 1995 to help define symptom levels below the threshold for a disorder and thereby facilitate earlier intervention. A formal subspecialty of developmental-behavioral pediatrics was established in 1999 to educate leaders in the field. Pediatric residents have had required training in developmental-behavioral pediatrics since 2008.
The year 1997 saw the first nationwide survey of parents about attention-deficit/hyperactivity disorder (ADHD), kicking off what could be called the decade of ADHD, in which prevalence rates climbed steadily, from 5.7% in 1997 to 9.5% in 2007. The prevalence of stimulant treatment in children also rose sharply in this period. According to the NIMH, stimulants were prescribed to 4.2% of 6- to 12-year-olds in 1996, a number that grew to 5.1% in 2008. Among 13- to 18-year-olds, the rate more than doubled during this time, from 2.3% in 1996 to 4.9% in 2008. The prevalence of autism also grew dramatically, from 1.9 per 1,000 children in 1997-1999 to 7.4 per 1,000 in 2006-2008, probably reflecting an evolving understanding of the disorder and the special access to school resources that the diagnosis provided.
Research during this decade became increasingly focused on imaging studies of children (and adults), as leaders in the field were trying to move from symptom clusters to anatomic and physiologic correlates of psychiatric illness. The great increase in medication use in children hit a speed bump in October 2004, when the Food and Drug Administration issued a controversial public warning about an increased risk of suicidal thoughts or behaviors in youth being treated with SSRI antidepressants. As access to child psychiatric treatment had become more difficult over the preceding decades, pediatricians had assumed much of the medication treatment of common psychiatric problems. The FDA’s black box warning complicated pediatricians’ efforts to fill this void.
The past decade has been the decade of genetics and of efforts to improve access to care. It started in 2007, when the FDA expanded its SSRI warning to acknowledge that depression itself increases the risk for suicide, in an effort not to discourage needed depression treatment in young people. But studies demonstrated that rates of diagnosing and treating depression dropped dramatically in the years following the warning: Diagnoses of depression declined by as much as 42% in children, and the rate of antidepressant treatment in adolescents dropped by as much as 32% in the 2 years after the warning (N Engl J Med. 2014 Oct 30;371[18]:1666-8). There was no compensatory increase in the use of other kinds of treatment. While suicide rates in young people had been stubbornly steady from the mid-1970s to the mid-1990s, they began to decline in 1996, according to the Centers for Disease Control and Prevention. That trend broke in 2004, with a jump in attempted and completed suicides in young people. The rate stabilized later in the decade but has never returned to the lows achieved before the warning.
This decade also was marked by passage of the Affordable Care Act, including – again – an unfulfilled mandate for mental health parity for insurance plans in the marketplace. Although diagnosis is still symptom based, the effort to define psychiatric disorders based on brain anatomy, neurotransmitters, and genomics continues to intensify. There is growing evidence that psychiatric disorders are a matter not of nature or nurture, but of nature and nurture: Epigenetic findings show that environment affects gene expression and brain functioning. These findings promise to deepen our understanding of the critical role of early experiences (consider Adverse Childhood Experiences [ACE] scores) and of the protective power of relationships, in schools and in parenting.
And what will come next? We believe that silos – medical, psychiatric, parenting, school, environment – will be bridged to illuminate the many factors that affect behavior and treatment, but the need to advocate for policies that fund the education and mental health care of children, and the training of professionals to provide that care, is never ending. As knowledge of the genome marches forward, we may discover effective strategies for preventing the emergence of mental illness in children or for creating individualized treatments. We may learn more about the role of nutrition and the microbiome in health and disease, and about autoimmunity and mental illness. Our focus may return to parents, not as culprits, but as the mediators of health from the prenatal period on. Technology may improve access to effective treatments, with teens monitoring their sleep and mood and accessing therapy on their smartphones. And our understanding of development and vulnerability may help us stem the rise in autism or collaborate with educators so that schools can better put every child on the healthiest possible path. We look forward to experiencing it – and writing about it – with you!
Dr. Swick is an attending psychiatrist in the division of child psychiatry at Massachusetts General Hospital, Boston, and director of the Parenting at a Challenging Time (PACT) Program at the Vernon Cancer Center at Newton-Wellesley Hospital in Newton, Mass. Dr. Jellinek is professor emeritus of psychiatry and pediatrics, Harvard Medical School, Boston. They said they had no relevant financial disclosures. Dr. Howard is assistant professor of pediatrics at Johns Hopkins University, Baltimore, and creator of CHADIS (www.CHADIS.com). She had no other relevant disclosures. Dr. Howard’s contribution to this publication was as a paid expert to Frontline Medical News. Email them at [email protected].