Antidepressants highly effective against binge-eating disorder
People with binge-eating disorder have the greatest chance of achieving normal eating habits and alleviating symptoms associated with the disorder by taking second-generation antidepressants, topiramate, and lisdexamfetamine and engaging in cognitive-behavioral therapy, an analysis of several studies showed.
The findings should be used to “address other treatments, combinations of treatments, and comparisons between treatments; treatment for postbariatric surgery patients and children; and the course of these illnesses,” according to the report, released as part of the Comparative Effectiveness Review No. 160 by the Agency for Healthcare Research and Quality.
The authors of the report examined a total of 52 randomized controlled trials and 15 observational studies collected through searches of MEDLINE, EMBASE, the Cochrane Library, Academic OneFile, and the Cumulative Index to Nursing and Allied Health Literature databases, with 48 of the included studies specifically concerning binge-eating disorder (BED). English-language studies up through Jan. 19, 2015, were included for analysis, and the investigators specifically looked for studies of individuals who met DSM-IV or DSM-5 criteria for BED and studies of postbariatric surgery patients, including children, experiencing loss-of-control (LOC) eating habits.
Each study was evaluated based on a set of 15 “key questions” to determine the effectiveness and harms of the treatments involved. The key questions used by the investigators sought to determine the evidence of effectiveness and harms of BED treatments; LOC eating among bariatric surgery patients; and the effectiveness of any LOC treatments based on age, sex, race, ethnicity, initial body mass index, duration of illness, and coexisting conditions. In addition, similar questions were used to ascertain the effectiveness of treatments on pediatric patients.
“Broadly, we included pharmacological, psychological, behavioral, and combination interventions,” the report stated. “We considered physical and psychological health outcomes in four major categories: binge behavior (binge eating or LOC eating); binge-eating–related psychopathology (e.g., weight and shape concerns, dietary restraint); physical health functioning (i.e., weight and other indexes of metabolic health, e.g., diabetes); and general psychopathology (e.g., depression, anxiety).”
Antidepressants were found to be more effective than placebos across the studies included in the survey, specifically second-generation antidepressants, and were 1.67 times more likely to help BED patients achieve abstinence than placebos used in these trials; 41% of subjects receiving antidepressants ultimately achieved abstinence, compared with 23% on placebos.
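For readers who want to see how a figure like "1.67 times more likely" relates to the raw percentages, here is a minimal illustrative sketch of a crude risk ratio. Note this is an assumption for illustration only: the review's 1.67 estimate comes from a pooled meta-analysis, so a simple ratio of the two reported percentages will not reproduce it exactly.

```python
# Illustrative only: crude risk ratio from the reported abstinence proportions.
# The review's pooled estimate (1.67) is meta-analytic, so the raw ratio of
# the two percentages (41% vs. 23%) differs slightly from it.
def risk_ratio(risk_treatment: float, risk_control: float) -> float:
    """Ratio of the event risk in the treatment arm to the control arm."""
    return risk_treatment / risk_control

crude_rr = risk_ratio(0.41, 0.23)
print(f"crude risk ratio: {crude_rr:.2f}")  # ~1.78, vs. the pooled 1.67
```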
With topiramate, binge eating generally decreased to as little as one episode per week, and a higher proportion of subjects (58%) achieved abstinence than those on placebo (28%). In addition, topiramate was found to decrease “obsessive thoughts and compulsions related to binge eating” by nearly 30%, versus 23% in subjects taking placebos.
Studies involving lisdexamfetamine showed abstinence in 40% of subjects, far higher than the 15% on placebo, and a likelihood of achieving abstinence 2.61 times higher than in the placebo cohorts. Binge-eating episodes per week also decreased, by an average of 1.3 to 1.7 episodes relative to placebo. Subjects receiving cognitive-behavioral therapy – whether therapist-led or self-led, though the former was found to have stronger evidence of effectiveness – had an average of 2.3 fewer binge-eating episodes per week, and were 4.95 times more likely to achieve abstinence than those not receiving therapy.
“Findings about BED treatment interventions are likely to be applicable to all adults age 18 and older with the disorder, but chiefly to overweight or obese women,” the report stated. “We cannot comment on the applicability of treatment findings for specific subgroups of adults (even among women) or whether findings extend to BED patients diagnosed based on DSM-5 criteria.”
The authors also noted that the findings are unclear with respect to adolescents with BED or members of ethnic groups, and children with loss-of-control eating or who have undergone bariatric surgery.
“A convention for reporting and analyzing” outcomes is necessary for the findings of this study to take on real-world applications that can be beneficial to clinicians and their patients in the near future, the authors concluded. However, more multisite randomized, controlled trials are needed.
Antidepressants may increase later onset of mania, bipolar
People diagnosed with unipolar depression have a higher chance of developing mania or bipolar disorder if they’ve previously been treated with antidepressants, a new study shows (BMJ Open. 2015 Dec 15. doi: 10.1136/bmjopen-2015-008341).
“Our findings demonstrate a significant association between antidepressant therapy in patients with unipolar depression and an increased incidence of mania,” Dr. Rashmi Patel of King’s College, London, and his associates reported in the study. Moreover, the association remains significant after adjusting for both age and gender, they wrote.
Dr. Patel and his associates conducted a retrospective cohort study on 21,012 individuals aged 16 to 65 years – all of whom were diagnosed with depression and had no previous diagnosis of mania or bipolar disorder between April 1, 2006, and March 31, 2013 – from the South London and Maudsley National Health Service Foundation Trust. Clinical data on subjects’ medical history, mental state examinations, diagnostic formulations, and management plans were collected. Subjects also were classified as having had “prior antidepressant therapy” if there was “documentation of antidepressant treatment prior to the date of diagnosis of depression.” Follow-ups occurred through March 31, 2014, and the primary outcome was a diagnosis of mania or bipolar disorder during that period.
Results showed an incidence rate of 10.9 per 1,000 person-years of mania or bipolar disorder across the entire study population. The lowest incidence, 8.3 per 1,000 person-years, was in the 56-65 years age cohort, while those in the 26-35 years age cohort had the highest incidence rate – 12.3 per 1,000 person-years (P = .004).
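For readers unfamiliar with the unit, an incidence rate per 1,000 person-years divides the number of new cases by the total follow-up time contributed by the cohort. A minimal sketch, using hypothetical counts (the study's raw case counts and follow-up totals are not given in this article):

```python
# Hypothetical numbers chosen only to illustrate the unit; these are not
# the study's raw data.
def incidence_per_1000_py(cases: int, person_years: float) -> float:
    """New cases per 1,000 person-years of follow-up."""
    return cases / person_years * 1000

# e.g., 109 new diagnoses over 10,000 person-years of follow-up
rate = incidence_per_1000_py(109, 10_000)
print(f"{rate:.1f} per 1,000 person-years")  # 10.9, matching the overall rate
```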
Subjects with prior antidepressant use had higher incidence rates of mania or bipolar disorder, with the size of the increase varying by antidepressant. Those on tricyclics (4.7% of subjects with previous antidepressant treatment) had an incidence rate of 13.1 per 1,000 person-years, while those taking trazodone (0.8%) had a rate of 19.1 per 1,000 person-years (P = .09 and P = .03, respectively), though only the latter reached statistical significance. The most commonly used antidepressants were selective serotonin reuptake inhibitors (35.5%), which yielded an incidence rate of 13.2 per 1,000 person-years.
“The association of antidepressant therapy with mania demonstrated in the present and previous studies highlights the importance of considering whether an individual who presents with depression could be at risk of future episodes of mania,” the authors wrote. They concluded that the findings reinforce the “ongoing need to develop better ways to predict future risk of mania in people with no prior history of bipolar disorder who present with an episode of depression.”
The study was supported by the U.K. Medical Research Council Clinical Research Training Fellowship. Neither Dr. Patel nor his associates reported relevant financial disclosures.
FROM BMJ OPEN
Key clinical point: Antidepressant use in patients can heighten the subsequent risk of developing mania or bipolar disorder.
Major finding: The overall incidence rate of mania/bipolar disorder was 10.9 per 1,000 person-years, but those numbers increased to 13.1-19.1 per 1,000 person-years when factoring in prior antidepressant treatment.
Data source: Retrospective cohort study of 21,012 adults with unipolar depression between April 1, 2006, and March 31, 2013.
Disclosures: The study was supported by the U.K. Medical Research Council Clinical Research Training Fellowship. Neither Dr. Patel nor his associates reported relevant financial disclosures.
Biomarkers beat DSM categories for capturing nuances in psychosis
Three biotypes surpass traditional diagnostic categories when it comes to identifying subgroups of psychosis, a study showed.
“Classification and treatment of brain diseases subsumed by psychiatry rely on clinical phenomenology, despite the call for alternatives,” wrote Brett A. Clementz, Ph.D., of the University of Georgia, Athens, and his coinvestigators. “There is overlap in susceptibility genes and phenotypes across bipolar disorder with psychosis and schizophrenia, and considerable similarity between different psychotic disorders on symptoms, illness course, cognition, psychophysiology, and neurobiology [while] drug treatments for these conditions overlap extensively” (Am J Psychiatry. 2015. doi: 10.1176/appi.ajp.2015.14091200).
The researchers recruited 711 people from Bipolar-Schizophrenia Network on Intermediate Phenotypes (B-SNIP) consortium sites (probands). All subjects had diagnoses of schizophrenia, schizoaffective disorder, or bipolar disorder with psychosis, and underwent interviews and laboratory data collection at the time of enrollment. In addition, 883 first-degree relatives of the 711 initial enrollees also were clinically evaluated, along with a cohort of 278 people deemed “demographically comparable [and] healthy” by investigators.
Biotypes for all individuals enrolled in the study were determined through laboratory tasks designed to “assess brain function at the neurocognitive/perceptual level.” These tasks consisted of the Brief Assessment of Cognition in Schizophrenia (BACS), pro- and antisaccade tasks, stop signal tasks, auditory paired stimuli and oddball evoked brain responses, and MRI acquisition and voxel-based morphometry.
The Structured Clinical Interview for DSM-IV and the Structured Interview for DSM-IV Personality Disorders were used for interviewing enrollees. Data compiled from these tests underwent multivariate taxometric analyses to compare biomarker variance across the three cohorts, in order to determine what, if any, heterogeneity exists in psychosis biotypes.
According to the results, diagnoses made with the clinical DSM guidelines yielded a single-severity continuum showing schizophrenia to be the most severe, followed by schizoaffective disorder and bipolar psychosis. However, biotypes showed significant variation, with investigators noting that “the three biotypes had distinctive patterns of abnormality across biomarkers that were neither entirely nor efficiently captured by a severity continuum.”
Larger separations were seen across biotype cohorts than across DSM diagnostic categories, specifically among probands. Among probands, group separation from healthy subjects on the BACS was –2.58, –1.94, and –0.35 for biotypes 1, 2, and 3, respectively. Separation was –0.99, –0.78, and –0.05 for the stop signal task, and 3.32, 1.90, and 1.19 for antisaccade errors. By comparison, DSM diagnoses yielded group differences on the BACS of –1.01, –1.51, and –1.83 for bipolar disorder with psychosis, schizoaffective disorder, and schizophrenia, respectively; –0.41, –0.61, and –0.55 for the stop signal task; and 1.36, 1.66, and 2.45 for antisaccade errors.
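The group separations reported above are standardized differences from the healthy comparison cohort. Assuming they are Cohen's-d-style standardized mean differences (the article does not spell out the exact formula), a minimal sketch with hypothetical scores:

```python
# Sketch of a Cohen's-d-style standardized group separation, under the
# assumption (not stated in this article) that the reported separations are
# standardized mean differences. All numbers below are hypothetical.
import math

def cohens_d(mean_a: float, sd_a: float, n_a: int,
             mean_b: float, sd_b: float, n_b: int) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical BACS scores: probands scoring one pooled SD below healthy subjects
d = cohens_d(35.0, 10.0, 100, 45.0, 10.0, 100)
print(round(d, 2))  # -1.0
```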
“Each biotype included all DSM psychosis categories, but probands diagnosed with schizophrenia were more numerous in biotype 1 (although 20% had bipolar disorder with psychosis), and probands diagnosed with bipolar disorder with psychosis were more numerous in biotype 3 (although 32% had schizophrenia), respectively,” the investigators noted.
The authors added that “when considered across proband and relative data, the biotype subgroups were superior to DSM diagnostic classes in between-group separations on external validating measures, illustrating the former scheme’s superiority for capturing neurobiological distinctiveness.”
Investigators noted that their approach did not use social functioning, brain structure, or characteristics of biological relatives in the creation of biotypes, which could have led to stronger results. Also, trial participants were mostly already on medication, classified as chronically psychotic, and tested at least once previously. The study also lacked a replication sample for this population.
The study was supported by grants from the National Institute of Mental Health. Dr. Clementz did not report any relevant financial disclosures. Dr. Matcheri S. Keshavan reported receiving a grant from Sunovion and serving as a consultant to Forum Pharmaceuticals. Dr. Carol A. Tamminga also reported potential conflicts.
Three biotypes surpass traditional diagnostic categories when it comes to identifying subgroups of psychosis, a study showed.
“Classification and treatment of brain diseases subsumed by psychiatry rely on clinical phenomenology, despite the call for alternatives,” wrote Brett A. Clementz, Ph.D., of the University of Georgia, Athens, and his coinvestigators. “There is overlap in susceptibility genes and phenotypes across bipolar disorder with psychosis and schizophrenia, and considerable similarity between different psychotic disorders on symptoms, illness course, cognition, psychophysiology, and neurobiology [while] drug treatments for these conditions overlap extensively” (Am J Psychiatry. 2015. doi: 10.1176/appi.ajp.2015.14091200).
The researchers recruited 711 people from Bipolar-Schizophrenia Network on Intermediate Phenotypes (B-SNIP) consortium sites (probands). All subjects had diagnoses of schizophrenia, schizoaffective disorder, or bipolar disorder with psychosis, and underwent interviews and laboratory data collection at the time of enrollment. In addition, 883 first-degree relatives of the 711 initial enrollees also were clinically evaluated, along with a cohort of 278 people deemed “demographically comparable [and] healthy” by investigators.
Biotypes for all individuals enrolled in the study were determined through laboratory tasks designed to “assess brain function at the neurocogntive/perceptual level.” These tasks consisted of the Brief Assessment of Cognition in Schizophrenia (BACS), pro- and antisaccade tasks, stop signal tasks, auditory paired stimuli and oddball evoked brain responses, and MRI acquisition and voxel-based morphometry.
The Structured Clinical Interview for DSM-IV and the Structured Interview for DSM-IV Personality Disorders were used for interviewing enrollees. Data compiled from these tests underwent multivariate taxometric analyses to compare biomarker variance across the three cohorts, in order to determine what, if any, heterogeneity exists in psychosis biotypes.
Three biotypes surpass traditional diagnostic categories when it comes to identifying subgroups of psychosis, a study showed.
“Classification and treatment of brain diseases subsumed by psychiatry rely on clinical phenomenology, despite the call for alternatives,” wrote Brett A. Clementz, Ph.D., of the University of Georgia, Athens, and his coinvestigators. “There is overlap in susceptibility genes and phenotypes across bipolar disorder with psychosis and schizophrenia, and considerable similarity between different psychotic disorders on symptoms, illness course, cognition, psychophysiology, and neurobiology [while] drug treatments for these conditions overlap extensively” (Am J Psychiatry. 2015. doi: 10.1176/appi.ajp.2015.14091200).
The researchers recruited 711 people from Bipolar-Schizophrenia Network on Intermediate Phenotypes (B-SNIP) consortium sites (probands). All subjects had diagnoses of schizophrenia, schizoaffective disorder, or bipolar disorder with psychosis, and underwent interviews and laboratory data collection at the time of enrollment. In addition, 883 first-degree relatives of the 711 initial enrollees also were clinically evaluated, along with a cohort of 278 people deemed “demographically comparable [and] healthy” by investigators.
Biotypes for all individuals enrolled in the study were determined through laboratory tasks designed to “assess brain function at the neurocognitive/perceptual level.” These tasks consisted of the Brief Assessment of Cognition in Schizophrenia (BACS), pro- and antisaccade tasks, stop signal tasks, auditory paired-stimuli and oddball evoked brain responses, and MRI acquisition with voxel-based morphometry.
The Structured Clinical Interview for DSM-IV and the Structured Interview for DSM-IV Personality Disorders were used for interviewing enrollees. Data compiled from these tests underwent multivariate taxometric analyses to compare biomarker variance across the three cohorts, in order to determine what, if any, heterogeneity exists in psychosis biotypes.
According to the results, diagnoses made with the clinical DSM guidelines yielded a single-severity continuum showing schizophrenia to be the most severe, followed by schizoaffective disorder and bipolar psychosis. However, biotypes showed significant variation, with investigators noting that “the three biotypes had distinctive patterns of abnormality across biomarkers that were neither entirely nor efficiently captured by a severity continuum.”
Larger separations from healthy subjects were seen in the biotype cohorts than in the DSM categories, particularly among probands. Among probands, group separation from healthy subjects on the BACS was –2.58, –1.94, and –0.35 for biotypes 1, 2, and 3, respectively. Separation was –0.99, –0.78, and –0.05 for the stop signal task, and 3.32, 1.90, and 1.19 for antisaccade errors. By contrast, DSM diagnoses yielded group differences on the BACS of –1.01, –1.51, and –1.83 for bipolar disorder with psychosis, schizoaffective disorder, and schizophrenia, respectively. Separation was –0.41, –0.61, and –0.55 for the stop signal task, and 1.36, 1.66, and 2.45 for antisaccade errors.
“Each biotype included all DSM psychosis categories, but probands diagnosed with schizophrenia were more numerous in biotype 1 (although 20% had bipolar disorder with psychosis), and probands diagnosed with bipolar disorder with psychosis were more numerous in biotype 3 (although 32% had schizophrenia), respectively,” the investigators noted.
The authors added that “when considered across proband and relative data, the biotype subgroups were superior to DSM diagnostic classes in between-group separations on external validating measures, illustrating the former scheme’s superiority for capturing neurobiological distinctiveness.”
The investigators noted that their approach did not use social functioning, brain structure, or characteristics of biological relatives in the creation of biotypes; incorporating those measures could have yielded stronger results. In addition, trial participants were mostly already on medication, chronically psychotic, and had been tested at least once previously, and the study had no replication sample for this population.
The study was supported by grants from the National Institute of Mental Health. Dr. Clementz did not report any relevant financial disclosures. Dr. Matcheri S. Keshavan reported receiving a grant from Sunovion and serving as a consultant to Forum Pharmaceuticals. Dr. Carol A. Tamminga also reported potential conflicts.
FROM THE AMERICAN JOURNAL OF PSYCHIATRY
Key clinical point: Brain scans capture gray matter volume differences among people with schizophrenia, schizoaffective disorder, and bipolar disorder that are missed by DSM diagnoses.
Major finding: Individuals with schizophrenia, schizoaffective disorder, and bipolar disorder with psychosis were compared with first-degree relatives and “demographically comparable healthy subjects” for biomarker variance; three psychosis variants were identified that were neurobiologically distinct and did not conform to accepted diagnostic boundaries.
Data source: A prospective cohort study of 711 individuals with schizophrenia, schizoaffective disorder, and bipolar disorder with psychosis, along with 883 first-degree relatives and 278 “demographically comparable healthy subjects.”
Disclosures: The study was supported by grants from the National Institute of Mental Health. Dr. Clementz did not report any relevant financial disclosures. Dr. Matcheri S. Keshavan reported receiving a grant from Sunovion and serving as a consultant to Forum Pharmaceuticals. Dr. Carol A. Tamminga also reported potential conflicts.
FOTS: Minimally invasive esophagectomy viable option to reduce morbidity, mortality
BOSTON – New and innovative methodologies for conducting minimally invasive esophagectomy (MIE) offer significantly lower rates of morbidity and mortality than those normally associated with the procedure, Dr. James D. Luketich said at the Focus on Thoracic Surgery: Technical Challenges and Complications meeting of the American Association for Thoracic Surgery.
While Dr. Luketich spent the bulk of his oral presentation going over the specifics of performing MIE, the literature accompanying his presentation delved into four key studies – performed and published over the last 12 years – that showed the efficacy of MIE over more traditional approaches to esophagectomy.
“There are several different approaches to esophagectomy in general [but] the technique has evolved partly because the tumors have evolved in the United States,” explained Dr. Luketich, chairman of cardiothoracic surgery at the University of Pittsburgh. “We started off laparoscopic [and] thoracoscopic. In my opinion, that was kind of a bad idea [and] we gave that up pretty early on [...] we’re chest surgeons, we put a thoracoscope in, and we loved it.”
However, as Dr. Luketich explained, the increasing lack of open-surgery experience among new general surgery residents and attendings made a laparoscopic approach the more attractive option, as it was a technique that everyone had experience with. This began a search for an effective but minimally invasive approach, which has slowly been cultivated and refined over the years.
Dr. Luketich discussed the outcome of his 2003 study assessing 222 consecutive patients who had undergone MIE at the University of Pittsburgh. In that study, patients had a lower mortality rate (1.4%) and shorter hospital stays (7 days) than those reported in “most open series” of esophagectomy, with quality-of-life scores 19 months post operation similar to preoperative scores and population norms.
The success of this trial led to the development of the intergroup ECOG 222 trial to determine MIE’s viability in a multicenter setting. Out of 104 patients eligible for MIE, 95 underwent the procedure. Median length of stay in intensive care units was 2 days, and hospital stay was 9 days, with a 2.1% 30-day mortality rate. At 35.8 months (the median follow-up time), the estimated 3-year overall survival was 58.4%.
Similar work was done in 2012, also headed by Dr. Luketich. In this trial, outcomes were evaluated in 1,033 consecutive MIE patients in order to assess the differences between “the modified McKeown minimally invasive approach (videothoracoscopic surgery, laparoscopy, neck anastomosis [MIE-neck]) with our current approach [and] a modified Ivor Lewis approach (laparoscopy, videothoracoscopic surgery, chest anastomosis [MIE-chest]).” MIE-neck was performed on 481 (48%) subjects and MIE-chest on 530 (52%) subjects.
Both procedures had similar median lengths of stay in the hospital (8 days) and the intensive care unit (2 days), with slightly lower rates of recurrent nerve injury in the MIE-chest cohort and a mortality rate of 0.9%. The median number of lymph nodes resected was 21, and total operative mortality was 1.68%, leading investigators to conclude that MIE was the “preferred approach” for resection (P less than .001).
Dr. Luketich also briefly discussed the findings of a 2012 study by Dr. S.S. Biere – an open-label, randomized controlled trial conducted at five study centers across three countries from June 2009 through March 2011. Fifty-six patients were randomized to open esophagectomy and 59 to MIE; all patients were aged 18-75 years and had resectable cancer of the esophagus or gastroesophageal junction.
Results showed a statistically significant decrease in postoperative pneumonia in the first 2 weeks after surgery in the MIE cohort, compared with open esophagectomy (9% vs. 29%; relative risk, 0.35; P = 0.005), as well as lower rates of postoperative pulmonary infection over the entire hospital stay. MIE patients also experienced shorter hospital stays (11 vs. 14 days), higher short-term quality-of-life scores at 6 weeks post surgery, lower postoperative pain scores, less operative blood loss, and lower rates of early morbidity.
Dr. Luketich disclosed having a “shareholder relationship” with Express Scripts and Intuitive Surgical.
AT AATS FOCUS ON THORACIC SURGERY: TECHNICAL CHALLENGES AND COMPLICATIONS
FDA advisory committees support changing codeine contraindications for children
SILVER SPRING, MD. – A joint meeting of the Food and Drug Administration’s Pulmonary-Allergy Drugs Advisory Committee (PADAC) and the Drug Safety and Risk Management Advisory Committee (DSaRM) on Dec. 10 voted overwhelmingly to support expanding the current contraindication for codeine to preclude its use for any pain management in all children under age 18 years.
Twenty members of the joint advisory panel voted for the aforementioned contraindication, while six members elected to contraindicate for any pain management in children younger than 12 years old, and another two members voted only to contraindicate for children younger than age 6 years. Only one member – Dr. Maureen Finnegan of the DSaRM – voted not to make any changes to the current contraindications for codeine.
The joint advisory panel also voted to contraindicate the use of codeine for the treatment of cough in all children younger than age 18 years by a similarly robust margin: 20 members voted for contraindicating in all pediatric patients, five voted to contraindicate only in patients younger than age 12 years, one voted to contraindicate in children younger than age 6 years, and three members voted not to make any changes at all.
The final voting question, asking whether to remove codeine from the FDA monograph for over-the-counter use in treating cough in children, was almost unanimously supported by the voting members of both committees. Only one member – Dr. Lorraine J. Gudas, a temporary voting member – supported removing codeine from the monograph only for children under age 2 years. Dr. Finnegan abstained from voting on this charge, telling the committee that “this is totally out of my wheelhouse.”
The decision to vote on approving amendments to the contraindications for codeine use – which would affect not just the monograph, but labeling as well – comes on the heels of the FDA announcing this summer that it would investigate the safety of codeine-containing drugs in children, asking health care providers and patients to report any adverse events associated with the drug.
The joint advisory panel cited reports of respiratory depression and death in pediatric patients, variability of codeine metabolism based upon CYP2D6 activity, and the fact that “some regulatory agencies have restricted use of codeine for both cough and analgesia in pediatric patients” as their key reasons for considering the changes to current contraindications, according to Dr. Sally Seymour, the FDA’s Deputy Director for Safety.
In a presentation on codeine use for pediatric analgesia, FDA Medical Officer Dr. Timothy Jiang cited several studies detailing adverse events in children taking codeine-containing drugs. One 2007 study by Voronov et al. involved a 29-month-old boy of North African heritage who took codeine/acetaminophen following adenotonsillectomy for “recurrent tonsillitis and mild-moderate sleep apnea”; the boy was found unresponsive the day after the operation but was later resuscitated.
Another study – published in the New England Journal of Medicine in 2009 by Catherine Ciszkowski and her associates – cited a similar case of a 2-year-old boy who received codeine after adenotonsillectomy and was found unresponsive; in this case, however, the boy died 2 days after the operation. Additionally, Dr. Jiang cited a 2012 search of the Adverse Event Reporting System, covering data from 1969 through May 1, 2012, which found six additional cases of death, as well as seven literature cases that mentioned patients’ CYP2D6 status as a possible contributing factor.
The FDA is not required to follow the advice of its advisory panels, but often does. No members of the panel reported any relevant financial conflicts of interest.
AT AN FDA ADVISORY COMMITTEE MEETING
Reslizumab: FDA panel recommends approval for adults, but not children
SILVER SPRING, MD. – Members of the FDA Pulmonary-Allergy Drugs Advisory Committee voted 11-3 on Dec. 9 to recommend the approval of reslizumab, a humanized monoclonal antibody, for the treatment of asthma and elevated blood eosinophils in patients aged 18 years and older but did not recommend approval for children aged 12-17 years.
The advisers were tasked with considering a dosage of 3 mg/kg of reslizumab, administered intravenously once every 4 weeks, for the management of severe asthma. The lack of a recommendation for the pediatric population stemmed from what panel members considered insufficient data from a small sample (19 patients), along with results that did not provide enough evidence that the treatment was of significant benefit to adolescents.
There were “limited data” presented to support use of reslizumab in pediatrics, and “all the evidence was going in the wrong direction,” according to panel member Erica H. Brittain, Ph.D., of the National Institute of Allergy and Infectious Diseases, Bethesda, Md.
Panelist Dr. Thomas A.E. Platts-Mills, professor of medicine at the University of Virginia, Charlottesville, added that a larger study is needed in order to more accurately assess the drug’s efficacy, and should also include patients younger than the age of 12 years.
Both of the questions posed to the panel by the FDA regarding reslizumab approval for pediatric patients – “Do the efficacy data provide substantial evidence of a clinically meaningful benefit?” and “Do the available efficacy and safety data support approval of reslizumab?” – received unanimous “no” votes from the 14-member voting panel.
The advisory panel members considered data from five separate trials evaluating the safety and efficacy of reslizumab to be marketed as Cinqair by Teva Pharmaceuticals. Those trials included two 16-week lung-function studies examining forced expiratory volume in 1 second (FEV1), two year-long asthma exacerbation studies, and an open-label safety extension study.
Advisers generally agreed that reslizumab demonstrated substantial improvement in FEV1 and asthma exacerbation rates in the adult population. Specifically, in the two exacerbation studies, no clinical asthma exacerbations occurred over the 12-month study period in 61% and 73% of subjects on reslizumab, vs. 44% and 52% of subjects in the respective control cohorts.
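The between-arm gap can be expressed as a simple absolute difference in exacerbation-free proportions. The sketch below is illustrative arithmetic using only the percentages reported above; it is not an analysis of the underlying trial data.

```python
# Proportions of subjects with zero clinical exacerbations over the
# 12-month study period, as reported for the two exacerbation studies.
reslizumab = [0.61, 0.73]
control = [0.44, 0.52]

def absolute_difference(treated, untreated):
    """Absolute difference in exacerbation-free proportions between arms."""
    return round(treated - untreated, 2)

differences = [absolute_difference(t, c) for t, c in zip(reslizumab, control)]
print(differences)  # [0.17, 0.21]
```

In both studies, roughly one additional patient in five remained exacerbation free on reslizumab relative to control.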
However, panelists voiced concern about the risk of muscle toxicity and, especially, anaphylaxis. In a presentation on the treatment’s immunogenicity issues, João A. Pedras-Vasconcelos, Ph.D., of the FDA Office of Pharmaceutical Quality, cautioned that the sponsor had not done enough work to “thoroughly investigate [the] root causes of anaphylaxis.”
Ultimately, the advisory committee largely agreed that the unmet need for reslizumab outweighed the potential risks associated with it.
In casting his “yes” vote regarding the adequacy of reslizumab’s safety profile, panel chair Dr. Dennis R. Ownby, professor of pediatrics at Georgia Regents University in Augusta, admitted that he was “reluctant” to endorse reslizumab, but said that he believes “this is a drug that clinicians will use very cautiously, [so] I’m placing faith on our practicing physicians” to prescribe the drug responsibly.
Advisers voted 11-3 to recommend approval of reslizumab as a safe and effective treatment of severe asthma and elevated blood eosinophils. Approval of reslizumab would make it the third monoclonal antibody to be FDA approved to treat asthma.
The FDA is not required to follow the advice of its advisory panels, but often does. No members of the panel reported any relevant financial conflicts of interest.
ELF the most effective mortality predictor for coinfected HIV, HCV patients
A new study concluded that enhanced liver fibrosis (ELF) was significantly more effective than were APRI and FIB-4 at predicting all-cause mortality in women infected with both HIV and hepatitis C virus.
“Using all three measures did not improve on the predictive value of ELF alone,” wrote the study’s authors, led by Dr. Marion G. Peters of the University of California, San Francisco, in the journal AIDS. All-cause mortality was chosen as the endpoint because “prior work noted that death certificates have significant limitations and liver disease may have been the underlying or contributing cause of septic death, renal death, or multisystem organ failure.”
Investigators recruited 381 women from the Women’s Interagency HIV Study (WIHS), an ongoing, prospective, multicenter cohort study conducted by the National Institutes of Health since 1993 to “investigate the impact of HIV infection on women in the [United States].” These women were enrolled in 1994-1995 and 2001-2002, all had HIV/HCV coinfection, and were evaluated twice a year with physical exams and laboratory testing, among other things (AIDS. 2015 Nov 29. doi: 10.1097/QAD.0000000000000975).
All women underwent APRI and FIB-4 testing to determine the presence and severity of fibrosis. However, since both tests are known to be individually associated with all-cause mortality, investigators also performed ELF, a procedure that “utilizes direct measures of fibrosis, hyaluronic acid, procollagen III aminoterminal peptide and tissue inhibitor of matrix metalloproteinase.” In total, there were 2,296 ELF measurements taken from the study population.
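Unlike ELF, the two comparator scores are computed directly from routine laboratory values. The standard published formulas can be sketched as follows; the numeric inputs below are hypothetical, not data from the WIHS cohort, and ELF itself (built on the direct serum markers quoted above) has no comparably simple public formula.

```python
import math

def apri(ast, ast_uln, platelets):
    """AST-to-Platelet Ratio Index: (AST / upper limit of normal) * 100,
    divided by the platelet count (10^9/L)."""
    return (ast / ast_uln) * 100 / platelets

def fib4(age, ast, alt, platelets):
    """FIB-4 index: (age * AST) / (platelets * sqrt(ALT))."""
    return (age * ast) / (platelets * math.sqrt(alt))

# Hypothetical laboratory values, not study data:
print(apri(ast=80, ast_uln=40, platelets=100))      # 2.0
print(fib4(age=45, ast=80, alt=64, platelets=100))  # 4.5
```

Higher values of both indexes suggest more advanced fibrosis, which is why each is individually associated with mortality.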
Of the 381 women enrolled, 134 (35.2%) died by the end of the study; 78 of those who died (58.2%) had severe liver fibrosis. By comparison, 270 women in the cohort had no or mild fibrosis and 33 had intermediate fibrosis. Receiver operating characteristic curves showed that ELF was a consistently more reliable indicator of all-cause mortality, especially based on data collected within 3 years of a subject’s death. When collected 1-3 years before death, ELF outperformed APRI, 0.71 to 0.61 (P = .005), and FIB-4, 0.71 to 0.65, though the latter difference was not statistically significant (P = .06). Within 1 year of death, ELF produced an area under the curve of 0.85, significantly higher than APRI’s 0.69 (P less than .0001) and FIB-4’s 0.75 (P = .0036).
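The area under the ROC curve reported here has a concordance interpretation: the probability that a randomly chosen woman who died had a higher marker value than a randomly chosen survivor. A minimal sketch of that calculation, using toy scores rather than study data:

```python
from itertools import product

def auc(scores_events, scores_nonevents):
    """Concordance form of the ROC area under the curve: the probability
    that a randomly chosen subject who died scored higher than a randomly
    chosen survivor (ties count as half)."""
    pairs = list(product(scores_events, scores_nonevents))
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0
                     for e, n in pairs)
    return concordant / len(pairs)

# Toy marker values, not study data:
print(auc([0.9, 0.8, 0.7], [0.6, 0.5, 0.4]))  # 1.0 -> perfect separation
print(auc([0.5, 0.5], [0.5, 0.5]))            # 0.5 -> no discrimination
```

On this scale, ELF’s 0.85 within 1 year of death means a woman who died outscored a survivor about 85% of the time.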
“This study showed that in a large cohort of women with HIV and HCV coinfection, ELF was better in predicting all-cause mortality than [was] APRI and FIB-4,” Dr. Peters and her colleagues concluded, adding that “ELF could be useful in stratifying patients to identify those HCV patients at greatest and most-immediate risk of liver decompensation and in need of new all-oral HCV therapies.”
The WIHS study is funded by the NIH’s National Institute of Allergy and Infectious Diseases. One of the study’s coauthors, Dr. William Rosenberg, disclosed being an inventor of the ELF test, which is currently owned by Siemens, a company for which Dr. Rosenberg consults; however, Dr. Rosenberg “receives no income from ELF testing.”
FROM AIDS
Key clinical point: Enhanced liver fibrosis is the most effective way to predict all-cause mortality in women infected with both HIV and HCV.
Major finding: Of 381 women with 2,296 ELF measurements, 134 (35.2%) died; within 1 year of death, the area under the curve was higher for ELF than for APRI (0.85 vs. 0.69; P less than .0001) and FIB-4 (0.85 vs. 0.75; P = .0036), and ELF also outperformed both measures 1-3 years before death.
Data source: 381 women with 2,296 ELF measurements, from the NIH’s Women’s Interagency HIV Study, a prospective, multicenter cohort of women at risk for, or currently diagnosed with HIV.
Disclosures: Study was supported by the National Institute of Allergy and Infectious Diseases. One coauthor (Dr. William Rosenberg) disclosed that he is an inventor of the ELF test, which is owned by Siemens, a company for which he consults.
More time in front of screens linked to more migraines in young adults
Electronic screens and young adults seem increasingly inseparable these days, but a new study warned that too much time in front of such screens is linked to more headaches and migraine in the young adult population.
“Previous studies have observed associations between screen time exposure and headaches” in 10- to 12-year-olds (Prev Med. 2014 Oct;67:128-33) and adolescents (BMC Public Health. 2010 Jun 9;10:324) and low-back and shoulder pain in adolescents (Eur J Public Health. 2006 Oct;16[5]:536-41), wrote investigators led by Ilaria Montagni, Ph.D., of the University of Bordeaux (France). “This had led to speculation that the high amount of screen time exposure among students of higher education institutions may be correlated with the high prevalence of headache and migraine observed in this population.”
Dr. Montagni and her coinvestigators enrolled 4,927 individuals in France, all of whom were aged 18 years and older and part of the Internet-based Students Health Research Enterprise (i-Share) project cohort, which is an ongoing, prospective, population-based cohort study of students at French-speaking universities and higher-education institutions. The mean age of the students involved was 20.8 years, and 75.5% were female (Cephalalgia. 2015 Dec 2. doi: 10.1177/0333102415620286).
Subjects completed self-reported surveys on the average amount of time they spent in front of screens during five activities: computer or tablet work, playing video games on a computer/tablet, Internet surfing on a computer/tablet, watching videos on a computer/tablet, and using a smartphone. All questions were scored on a 0-5 point scale: 0 for never, 1 for less than 30 minutes, 2 for 30 minutes to 2 hours, 3 for 2-4 hours, 4 for 4-8 hours, and 5 for 8 hours or more. Scores from the surveys were divided into quartiles of very low, low, high, and very high screen-time exposure.
Surveys also asked if they had experienced any headaches in the last 12 months that lasted several hours; those who answered negatively were classified as “no headache” while those who answered positively were asked a series of follow-up questions related to symptom type and severity, sensitivity to light or sound, nausea, vomiting, and if the headaches ever disrupted daily routines, among other things. To establish a classification of migraine, the investigators used the “probable migraine” category of the International Classification of Headache Disorders, third edition. From these data, multinomial logistic regression models were used to calculate odds ratios of any relationship between screen-time exposure and the presence and severity of headaches.
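The study’s odds ratios came from adjusted multinomial logistic regression models, but the quantity itself is easiest to see in its crude 2×2 form. The sketch below uses hypothetical counts, not the i-Share data:

```python
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Crude odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return ((exposed_cases / exposed_noncases)
            / (unexposed_cases / unexposed_noncases))

# Hypothetical counts: 30 of 100 high-exposure subjects report migraine
# vs. 20 of 100 low-exposure subjects.
print(round(odds_ratio(30, 70, 20, 80), 2))  # 1.71
```

An odds ratio above 1 indicates that the outcome is more likely in the exposed group; the regression models additionally adjust for covariates such as age, sex, and BMI.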
Of the 4,927 subjects, 2,773 (56.3%) reported no headaches, while 710 (14.4%) reported nonmigraine headache, 791 (16.1%) reported migraine without aura, and 653 (13.3%) reported migraine with aura. Compared with very low screen-time exposure, very high exposure was associated with 37% greater odds of migraine overall and a statistically significant 50% greater odds of migraine without aura; no such association was seen for migraine with aura.
“Students reporting very high screen time exposure were more likely to be male, to be older, to have higher BMI, and to consume cannabis [and] were also more likely to report non-migraine headache or migraine,” the authors noted. Furthermore, higher exposure to screens was a significant indicator of recurrent headaches in adolescent males, and the same indicator was seen in adolescent females who spent more time on the computer and in front of the TV.
The study was funded by a grant of the “Future Investments” program in the framework of the IdEx University of Bordeaux program. The i-Share project is supported by the French National Research Agency. The authors did not report any relevant financial disclosures.
FROM CEPHALALGIA
Key clinical point: Young adults who have a high cumulative exposure to screens on computers, tablets, televisions, and smartphones are significantly more likely to experience migraines.
Major finding: Odds ratios were 1.37 for students with very high screen time exposure to develop headaches and 1.50 for migraine without aura, compared with students who reported very low screen time exposure.
Data source: Cross-sectional study of 4,927 university or higher education–enrolled individuals with a mean age of 20.8 years.
Disclosures: The study was funded by a grant of the “Future Investments” program in the framework of the IdEx University of Bordeaux program. The i-Share project is supported by the French National Research Agency. The authors did not report any relevant financial disclosures.
HbA1c strengthens diabetes predictive model more in whites than in African Americans
The addition of hemoglobin A1c to a type 2 diabetes risk prediction model improved its performance in both white and African American populations, according to a new cohort study, but not to the same degree in each.
However, while adding HbA1c to the existing Atherosclerosis Risk in Communities (ARIC) diabetes risk prediction model can improve its overall predictive value, the model is significantly more effective at predicting incidence of type 2 diabetes within white populations than among populations of African Americans (Diabetes Care. 2015 Dec 1. doi: 10.2337/dc15-0509).
“Results from the current analysis confirm our hypothesis that when type 2 diabetes prediction models are updated to include A1c, a reflection of current clinical practice for the diagnosis of type 2 diabetes, there exists a racial divide in model performance,” wrote Dr. Mary E. Lacy of Brown University, Providence, R.I.
The study enrolled 2,456 participants of the Coronary Artery Risk Development Study in Young Adults (CARDIA), an ongoing, multicenter, longitudinal study that began in 1985-1986 with 5,115 men and women aged 18-30 years: 2,637 African Americans and 2,478 whites. Dr. Lacy and her colleagues recruited from the year 20 group in 2005-2006. All subjects were diabetes free at baseline; the study excluded those who had prevalent diabetes or no diabetes data at year 20.
Follow-up was done after 5 years (year 25 from the start of CARDIA). The subjects had fasting and 2-hour postload glucose measured via the hexokinase ultraviolet method, as well as HbA1c, measured with a Tosoh G7 high-performance liquid chromatography instrument; the same measurements had been taken at baseline in year 20. Using these data, the investigators assessed model discrimination, calibration, and integrated discrimination improvement for incident diabetes, defined according to the American Diabetes Association’s 2010 guidelines, both before and after adding HbA1c to the model.
Discrimination with the model for the overall population gave an area under the curve of 0.841, which investigators described as “good.” That number rose to 0.863 when factoring in HbA1c (P = .03). Similar increases in model discrimination were found in both white (0.875 to 0.902, P = .08) and African American (0.796 to 0.816, P = .14) populations.
“When racial subgroups in CARDIA were analyzed separately, model performance was better among whites than African Americans,” the authors concluded. “For all models that used ADA 2010 diagnostic guidelines, model discrimination was significantly higher in whites than African Americans [and] the addition of baseline A1c improved model discrimination in whites and African Americans to a similar degree.”
Integrated discrimination improvement analyses also showed that HbA1c substantially improved model discrimination for both the African American and white cohorts. Furthermore, in the overall study population, “the 5-year incidence of type 2 diabetes was 3.0% (n = 74) under the ADA 2004 diagnostic guidelines versus 5.1% (n = 124) using the ADA 2010 guidelines.” African Americans were found to be at slightly higher risk of developing type 2 diabetes than whites.
Dr. Lacy and her colleagues concluded that these findings highlight the need for risk prediction models to use the 2010 American Diabetes Association guidelines and to incorporate HbA1c as a parameter. However, further study is needed to develop a predictive model as effective for African Americans and other racial minorities as this one proved to be for whites.
The researchers did not report any conflicts of interest. The National Heart, Lung, and Blood Institute and the National Institute on Aging funded the study.
FROM DIABETES CARE
Key clinical point: Adding hemoglobin A1c to the existing Atherosclerosis Risk in Communities diabetes risk prediction model can significantly improve prediction of diabetes incidence in white and African American populations.
Major finding: With HbA1c added, model discrimination (area under the curve) increased from 0.841 to 0.863 in the overall population, from 0.796 to 0.816 among African Americans, and from 0.875 to 0.902 among whites.
Data source: Cohort study of 2,456 white and African American participants in the CARDIA study in 2005-2006 who did not have diabetes.
Disclosures: The National Heart, Lung, and Blood Institute and the National Institute on Aging funded the study. The authors reported no conflicts of interest.
Study suggests radiologically isolated syndrome is part of MS spectrum
The findings of a new multicenter, multinational cohort study of radiologically isolated syndrome offer further evidence that the condition belongs on the multiple sclerosis spectrum, given the rate at which patients progressed to primary progressive MS over the course of the study.
“This is the first report of the temporal course within the preprogression phase for an extremely rare group of subjects originally identified by MRI as having asymptomatic disease, who ultimately experienced progressive symptom evolution consistent with PPMS [primary progressive MS] that could not otherwise be explained by any other mechanism (excessive alcohol use, vitamin deficiencies, etc.),” wrote the investigators, led by Dr. Orhun H. Kantarci of the Mayo Clinic in Rochester, Minn. (Ann Neurol. 2015 Nov 24. doi: 10.1002/ana.24564).
Dr. Kantarci and his coinvestigators evaluated 453 patients with radiologically isolated syndrome (RIS) at 22 centers in five countries – the United States, France, Italy, Spain, and Turkey – collecting data on MRI scans, lesions, and cerebrospinal fluid at baseline and during follow-up, which extended to just over 20 years in some cohorts. Demographic and clinical data were also analyzed for each patient enrolled.
Ultimately, 128 of the 453 RIS patients (28%) developed symptomatic MS, of whom 15 (12%) evolved to PPMS while the remaining 113 “developed a first acute clinical event related to [central nervous system] demyelination consistent with CIS [clinically isolated syndrome]/MS diagnosis.” RIS occurred at a median age of 43.3 years (range, 20-66 years) and evolved to PPMS at a mean age of 49.1 years. The median time to conversion was 3.5 years over a median follow-up period of 5.8 years (P = .05).
Nine of the 15 PPMS patients were men and the remaining six were women; PPMS patients were more often male, compared with those who developed CIS/MS (P = .005). Additionally, median age at RIS onset and median age at symptomatic evolution were both about 10 years older in patients with PPMS than in those who developed CIS/MS (P less than .001).
“The 12% prevalence of PPMS in this large RIS cohort, as well as age at PPMS onset, are strikingly similar to that of large clinical studies in MS,” the authors noted. “Studying RIS, therefore, provides an opportunity to better understand the onset of clinical MS and to test early intervention.”
Dr. Kantarci and his associates also pointed out that, in their study, the older age of PPMS onset versus CIS/MS was “clearly” independent of individual follow-up times. Therefore, they also concluded that “age dependence of progressive MS development, in the absence of previous clinical relapses despite having clear subclinically active MS, suggests that biological aging mechanisms may be a significant contributor for development of progressive MS.”
The study was conducted without any specific funding source. Dr. Kantarci and his colleagues did not report any relevant financial disclosures.
FROM ANNALS OF NEUROLOGY
Key clinical point: Patients with radiologically isolated syndrome should be treated as part of the multiple sclerosis spectrum.
Major finding: Of RIS patients who developed symptomatic MS, 12% evolved to primary progressive MS at a mean onset age of 49.1 years (P = .05), roughly the proportion of PPMS seen in the general MS population.
Data source: Multicenter cohort study of 453 RIS patients at 22 clinical sites across five countries.
Disclosures: The study was conducted without any specific funding source. The authors reported no relevant financial disclosures.