Air pollution seen acting on stress hormones

How pollution exposure may lead to disease

Increases in stress hormone levels and other adverse metabolic changes accompany higher exposure to air pollution, Chinese researchers have found, while cutting indoor pollution levels appears to mitigate these effects.

For their research, Huichu Li, MS, Jing Cai, PhD, and their colleagues at Fudan University in Shanghai, China, recruited 55 college students (27 female) living in nonsmoking dormitories into a randomized controlled trial in which air purifiers were placed in half the cohort’s dormitories for 9 days, with sham purifiers used in the other half. Students’ serum and urine metabolites were analyzed after 9 days using gas and liquid chromatography–mass spectrometry. After a washout period of 12 days, the study arms were switched for another 9 days, and the tests were repeated.

The study design required that students spend as much time in their dorms as possible with the windows closed, though they could venture out for classes and exams. Mean fine particle concentration in the purifier-treated dorms was 8.6 mcg per cubic meter during the study period, compared with a mean of 101.4 mcg per cubic meter outdoors. The researchers determined that students’ time-weighted average exposure to fine particle pollutants was reduced by more than half when the dorm air was being purified, though average exposure was estimated at 24 mcg per cubic meter at best. The World Health Organization considers levels below 10 mcg per cubic meter to be safe.
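
As context for the time-weighted figure, a time-weighted average exposure is each microenvironment’s concentration weighted by the hours spent there. A minimal formulation (the two-location split below is an illustrative assumption, not the study’s measured time budget):

$$ \bar{C}_{\mathrm{TWA}} = \frac{\sum_i C_i \, t_i}{\sum_i t_i} $$

where $C_i$ is the fine-particle concentration in microenvironment $i$ and $t_i$ is the time spent there. For example, 20 hours per day at 8.6 mcg per cubic meter (purified dorm) plus 4 hours at 101.4 (outdoors) gives (20 × 8.6 + 4 × 101.4)/24 ≈ 24 mcg per cubic meter, in line with the reported best-case estimate.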

Students in untreated dorms had significant increases in cortisol, cortisone, epinephrine, norepinephrine, and biomarkers of oxidative stress at 9 days, compared with those in treated dorms. Glucose, insulin, measures of insulin resistance, amino acids, fatty acids, and lipids differed significantly between treatment assignments, and the untreated dorm groups also had 2.61% higher systolic blood pressure (95% confidence interval [CI], 0.39-4.79).

Glucocorticoids are known to affect blood pressure, the investigators noted. Serum cortisol and cortisone levels were 1.3 and 1.2 times higher, respectively, for the students in the sham-treated dorms, with each 10-mcg per cubic meter increase in pollutant exposure associated with a 7.8% increase in cortisol (95% CI, 4.75-10.91) and a nearly 3.8% increase in cortisone (95% CI, 1.84-5.71). Similar exposure-dependent increases were seen for norepinephrine, melatonin, phenylalanine, tyrosine, L-tryptophan, and other compounds.
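
As a back-of-the-envelope illustration of these exposure-response figures (assuming the per-10-mcg increments compound multiplicatively, as in the log-linear models typically used for such estimates; the paper’s exact specification is not restated here):

$$ (1.078)^{2} - 1 \approx 0.162 $$

that is, a 20-mcg per cubic meter higher exposure would correspond to roughly 16% higher serum cortisol.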

“To the best of our knowledge, this is the first study that used the untargeted metabolomics approach to investigate human global metabolic changes in relation to changes in ambient [air pollution] exposures,” the investigators wrote in their analysis, adding that the findings “provide insights into the potential mechanisms of the adverse health effects that have been found to be associated with [pollution] exposure.”

Mr. Li and Dr. Cai recommended the use of indoor air purification technology as a practical way to reduce harmful exposure, noting that the benefits of long-term use, particularly relating to cardiovascular and metabolic health, remain to be established.

The study was funded with grants from national and regional government agencies in China, and none of its authors declared conflicts of interest.

Commentary

 

Although the past decade has seen considerable advances in our knowledge of how air pollutants promote cardiovascular diseases, important questions remain.

There is a need to better understand the precise nature and systemic pathways whereby ambient air pollution elicits a multitude of adverse responses in the heart and vasculature anatomically remote from the site of inhalation. Also, what can (and should) individuals do to protect themselves against the hazards of air pollution, given that substantial improvements in air quality throughout many parts of the world are likely decades away?

Li and colleagues have provided some significant insights into both of these issues. Responses to short-term exposure to high levels of pollution include increased blood pressure and insulin resistance, along with alterations in a battery of circulating markers indicative of systemic inflammation, oxidative stress, and platelet activation.

A distinguishing feature of their work is the detailed exploration of health responses using state-of-the-art metabolomic profiling. Although similar outcomes after brief exposure to ozone have been shown, this was the first use of an untargeted metabolomic approach to evaluate the impact of ambient air pollution. The results confirm and extend the growing body of evidence that air pollution elicits systemic perturbations favoring the development of the metabolic syndrome. The findings also add to the evidence that simple interventions such as air purifier systems with high-efficiency filters can help protect against adverse health impacts of air pollution. The reduction in estimated exposure afforded by filtration favorably influenced most of the health outcomes (blood pressure, insulin resistance, oxidative stress, inflammation), curtailed pollution-induced activation of the sympathetic nervous system and hypothalamic-pituitary-adrenal axis, and helped mitigate the ensuing metabolomic perturbations.
 

Robert D. Brook, MD, of the University of Michigan, Ann Arbor, and Sanjay Rajagopalan, MD, of University Hospitals Cleveland Medical Center, made these comments in an editorial (Circulation. 2017 Aug 14;136:628-31). Dr. Brook receives research support from RB, Inc. Dr. Rajagopalan had no disclosures.

FROM CIRCULATION

Vitals

 

Key clinical point: Air pollution exposure is associated with increases in stress hormones and a wide array of other biochemical changes.

Major finding: Increases in pollution exposure corresponded to increases in cortisol, cortisone, epinephrine, and norepinephrine.

Data source: A randomized, double-blind crossover trial in 55 subjects in Shanghai, China.

Disclosures: Chinese national and regional governments supported the study, whose authors declared no conflicts of interest.


APAP improves aerophagia symptoms

Relief from aerophagia symptoms should compel switch to APAP

Switching continuous positive airway pressure–treated patients to autotitrating positive airway pressure (APAP) systems resulted in reduced severity of patient-reported aerophagia symptoms, according to results from a double-blind, randomized study.

Commentary

Octavian C. Ioachimescu, MD, PhD, FCCP, comments: This is an interesting study, as currently we have very few therapeutic modalities available for patients with obstructive sleep apnea and positive airway pressure-induced or exacerbated aerophagia. The interesting finding that auto-adjusting positive airway pressure is superior to fixed positive airway pressure therapy may be related to the large differences between the pressures seen in the two groups (larger than in prior studies). Nevertheless, this may be of help to clinicians. What does one do when a patient on autoPAP therapy has significant aerophagia? Well, that is for another article and another editorial...

FROM THE JOURNAL OF CLINICAL SLEEP MEDICINE

Vitals

 

Key clinical point: Sleep apnea patients complaining of aerophagia associated with continuous positive airway pressure may find relief using autotitrating positive airway pressure.

Major finding: The APAP-treated group saw significantly reduced median therapeutic pressure levels compared with the CPAP-treated patients (9.8 vs. 14.0 cm H2O, P less than .001) and slight but statistically significant reductions in self-reported symptoms of bloating, flatulence, and belching.

Data source: A randomized, double-blind trial of 56 adult sleep apnea patients with prior CPAP use and aerophagia, treated at a hospital sleep clinic in Australia.

Disclosures: The government of Queensland helped fund the study. The authors disclosed no conflicts of interest.


Eye disease affects 1 in 5 adults with severe atopic dermatitis


 

Results of a large cohort study in Denmark found that adults with atopic dermatitis (AD) were significantly more likely to be affected by certain ocular conditions, compared with those who did not have AD.

“Keratitis, conjunctivitis, and keratoconus as well as cataracts in patients younger than 50 years occurred more frequently in patients with AD and in a disease severity–dependent manner,” concluded the authors, who wrote that as far as they know, this is the largest study conducted to date of ocular disorders in adults with AD.

The findings, published in June, are based on a population-based sample of 4.25 million adults in Denmark, using national health care and prescription registries, of whom 5,766 had been diagnosed with mild AD and another 4,272 with severe AD. The researchers, led by Jacob Thyssen, MD, PhD, of the department of dermatology and allergy, Herlev and Gentofte Hospital, University of Copenhagen, Hellerup, Denmark, found that 12% of patients with mild AD and 19% of those with severe AD were prescribed at least one anti-inflammatory ocular medication commonly used to treat conjunctivitis or keratitis, compared with only 4.5% of those without AD (J Am Acad Dermatol. 2017 Aug;77[2]:280-6).

The investigators also found an elevated risk of a keratitis diagnosis among patients with mild AD (hazard ratio, 1.66; 95% confidence interval, 1.15-2.40) and those with severe AD (HR, 3.17; 95% CI, 2.31-4.35). Severe AD was also associated with an elevated risk of keratoconus (HR, 10.01; 95% CI, 5.02-19.96).

Cataracts and glaucoma were not more common among those with AD overall. However, cataract risk was significantly increased among patients younger than 50 years with both mild and severe AD, but not among those older than 50 years with AD. Glaucoma risk associated with AD did not differ by age.

The investigators acknowledged that the study could not capture the reasons why anti-inflammatory ocular medicines were prescribed and that such medicines could have been prescribed for conditions other than conjunctivitis or keratitis.

Capturing the risk of ocular diseases in AD is important, they wrote, referring to “emerging concern” about the incidence of conjunctivitis with “near-future” biologic treatments for AD and the potential for long-term consequences. They cited adverse event data from randomized clinical trials of dupilumab, an interleukin-4 receptor–alpha antagonist approved by the Food and Drug Administration in March 2017 for treatment of moderate to severe AD, which included more cases of conjunctivitis among those treated with the biologic, compared with those on placebo (N Engl J Med. 2016 Dec 15;375:2335-48). A “weak trend” toward more cases of conjunctivitis was also reported among patients treated with an IL-13 inhibitor, lebrikizumab, in a phase 2 study of adults with AD, they wrote.

Treatments targeting IL-4 receptor–alpha have been shown to result in increased blood eosinophil counts, and “these elevations might have clinical effects,” Dr. Thyssen and his colleagues wrote, adding: “Notably, eosinophils are pathognomonic for allergic eye disease.”

Dr. Thyssen disclosed funding from the Lundbeck Foundation and honoraria from Roche, Sanofi Genzyme, and LEO Pharma. Three other authors on the study reported research funding and/or honoraria from pharmaceutical firms.

FROM THE JOURNAL OF THE AMERICAN ACADEMY OF DERMATOLOGY

Vitals

 

Key clinical point: Conjunctivitis, keratitis, and keratoconus are more common in patients with atopic dermatitis than in the general population.

Major finding: 19% of adults with severe AD received a prescription for an anti-inflammatory eye medication, compared with 4.5% of the general population.

Data source: Epidemiologic data from more than 4 million patients in Danish health care and prescription registries.

Disclosures: Four investigators disclosed outside grant funding and/or financial relationships with pharmaceutical manufacturers.


Acute otitis media: Which children to treat


 

Children whose ear infections involve severe bulging of the eardrum are likely to benefit from antibiotic treatment, while children with a peaked tympanogram pattern are likely to recover from the infections without the use of antibiotics, according to results from a new study.

Looking at results across treatment groups in the randomized, placebo-controlled trial, the researchers found that a peaked tympanogram result at the start of the study lowered the risk of treatment failure (hazard ratio, 0.43; 95% confidence interval, 0.21-0.88; P = .02). They also found that the biggest between-group differences in treatment failure occurred among children with severe bulging of the tympanic membrane, defined by the authors as “convexity markedly increased beyond the edges of tympanic membrane, resembling a doughnut” (11.1% vs. 64.1%; rate difference, −53.0%; 95% CI, −73.5% to −32.4%). The number needed to treat among children with severe bulging was 1.9, indicating that 2 children need to be treated with antimicrobial agents to prevent treatment failure in 1 child.
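
The number needed to treat follows directly from the reported failure rates: it is the reciprocal of the absolute rate difference between the placebo and antibiotic groups:

$$ \mathrm{NNT} = \frac{1}{0.641 - 0.111} = \frac{1}{0.530} \approx 1.9 $$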

Dr. Tähtinen and her colleagues noted that in their study, severe bulging was an important prognostic factor irrespective of the number of ears affected or the severity of other common symptoms such as pain or fever; neither bilateral involvement nor symptom severity was seen to affect the risk of treatment failure. Though only a minority of children in the study had a peaked tympanogram, failure rates in this group were low with both treatment and placebo, and the number needed to treat to prevent one treatment failure was 29.

“The evaluation of otoscopic signs is always subjective and prone to interobserver bias,” the researchers wrote. “Severe bulging is, however, a sign that is difficult to miss even for a less experienced otoscopist, and therefore this prognostic factor as an indication for antimicrobial treatment could be easily applied into clinical practice.”

Dr. Tähtinen and her colleagues’ research was supported by grants from the European Society for Pediatric Infectious Diseases and several other foundations. The study authors disclosed no financial conflicts of interest.

FROM PEDIATRICS

Vitals

 

Key clinical point: Severe bulging of the eardrum, or a peaked tympanogram pattern, can inform the administration or withholding of antibiotic treatment in acute otitis media (AOM).

Major finding: The number needed to treat using antibiotics among children presenting with severe eardrum bulging was 1.9.

Data source: Analysis of results from a randomized, placebo-controlled trial (n = 319) of children younger than 3 years with AOM, conducted in Finland during 2006-2008.

Disclosures: Study authors disclosed research support from several foundations but no commercial conflicts of interest.


Studies support early use of genetic tests in early childhood disorders

Evidence for a new first-tier diagnostic approach

 

Results from two new studies suggest that genetic testing early in the diagnostic pathway may allow for earlier and more precise diagnoses in early-life epilepsies and a range of other childhood-onset disorders, and potentially limit costs associated with a long diagnostic course.

Both papers, published online July 31 in JAMA Pediatrics, showed the diagnostic yield of genetic testing approaches, including whole-exome sequencing (WES), to be high.

The results also argue for incorporating genetic testing into initial diagnostic assessments, for not limiting it to severe presentations, and for employing broad testing methods rather than narrower ones.

In a prospective cohort study led by Anne T. Berg, PhD, of Ann & Robert H. Lurie Children’s Hospital in Chicago, 680 children with newly diagnosed early-life epilepsy (onset at less than 3 years of age) and without acquired brain injury were recruited from 17 hospitals in the United States.

Of these patients, just under half (n = 327) underwent various forms of genetic testing at the discretion of the treating physician, including karyotyping, microarrays, epilepsy gene panels, WES, mitochondrial panels, and other tests. Pathogenic variants were discovered in 132 children, or 40% of those receiving genetic testing (JAMA Pediatr. 2017 July 31. doi: 10.1001/jamapediatrics.2017.1743).

Of all the genetic testing methods employed in the study, diagnostic yields were significantly greater for epilepsy gene panels (29.2%) and WES (27.8%), compared with chromosome microarray (7.9%).

The results, the investigators said, provide “added impetus to move the diagnosis of the specific cause to the point of initial presentation ... it is time to provide greater emphasis on and support for thorough genetic evaluations, particularly sequencing-based evaluations, for children with newly presenting epilepsies in the first few years of life.”

In addition to aiding management decisions, early genetic testing “ends the diagnostic odyssey during which parents and physicians spend untold amounts of time searching for an explanation for a child’s epilepsy and reduces associated costs,” Dr. Berg and her colleagues concluded.

In a separate study led by Tiong Yang Tan, MBBS, PhD, of Victorian Clinical Genetics Services in Melbourne, Australia, and his colleagues, singleton WES was used in 44 children recruited at outpatient clinics of a Melbourne hospital system (JAMA Pediatr. 2017 July 31. doi: 10.1001/jamapediatrics.2017.1755).

Children in the study were aged 2-18 years (mean age at presentation, 28 months) and had a wide variety of suspected genetic disorders, including skeletal, skin, neurometabolic, and intellectual disorders; some had features overlapping several conditions. None of the children had undergone genetic testing before WES.

The molecular test resulted in a diagnosis in 52% (n = 23) of the children, including unexpected diagnoses in eight. Clinical management was altered as a result of sequencing findings in six children.

“Although phenotyping is critical, 35% of children had a diagnosis caused by a gene outside the initially prioritized gene list. This finding not only possibly reflects lack of clinical recognition but also underscores the utility of WES in achieving a diagnosis even when the a priori hypothesis is imprecise,” Dr. Tan and his associates wrote in their analysis.

Dr. Tan and his colleagues conducted a cost analysis that found that WES performed at initial tertiary presentation resulted in a cost savings of U.S. $6,838 per additional diagnosis (95% confidence interval, U.S. $3,263-$11,678), compared with the standard diagnostic pathway. The figures reflect costs in an Australian care setting.
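
Cost per additional diagnosis has the structure of an incremental cost-effectiveness ratio; a minimal sketch of the calculation (the pathway totals are placeholders, not figures from the paper):

$$ \frac{\mathrm{Cost}_{\mathrm{WES}} - \mathrm{Cost}_{\mathrm{standard}}}{\mathrm{Diagnoses}_{\mathrm{WES}} - \mathrm{Diagnoses}_{\mathrm{standard}}} $$

A negative numerator with a positive denominator, as reported here, means the WES-first pathway both costs less and yields more diagnoses, which is expressed as a savings per additional diagnosis.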

The children in the study had a mean diagnostic odyssey of 6 years, including a mean of 19 tests and four clinical genetics and four non–genetics specialist consultations. A quarter of them had undergone at least one diagnostic procedure under general anesthesia.

“The diagnostic odyssey of children suspected of having monogenic disorders is protracted and painful and may not provide a precise diagnosis,” Dr. Tan and his colleagues wrote in their analysis. “This paradigm has markedly shifted with the advent of WES.”

WES is best targeted to children “with genetically heterogeneous disorders or features overlapping several conditions,” the investigators concluded. “Our findings suggest that these children are best served by early recognition by their pediatrician and expedited referral to clinical genetics with WES applied after chromosomal microarray but before an extensive diagnostic process.”

Dr. Tan and his colleagues’ study was funded by the Melbourne Genomics Health Alliance and state and national governments in Australia. None of the authors declared conflicts of interest. Dr. Berg and her colleagues’ study was funded by the Pediatric Epilepsy Research Foundation, and none of its authors disclosed commercial conflicts of interest.

 

 

Commentary

 

The studies by Tan et al. and Berg et al. demonstrate the dramatic effect of the diagnostic yield of different genetic testing approaches on cost-effectiveness and the potential design of testing strategies in children with suspected monogenic conditions. Both studies emphasize the effect of genetic test results. Whereas Tan et al. showed that, in 26% of cases, the result enabled a specific modification of patient care, Berg et al. also demonstrated that there is no basis for identifying optimal, targeted treatments when testing is not performed and genetic diagnoses are not made.

However, even in the absence of targeted treatments, a genetic diagnosis is of high value for the patients, their families, and treating physicians. A clear diagnosis may not only be of prognostic value but also put an end to a possibly stressful and demanding diagnostic odyssey. It may enable patient care that is explicitly focused on the individual needs of the patient. A clear diagnosis usually also allows a better assessment of the risks of recurrence in the family and possibly enables prenatal testing in relatives. Finally, it enables research and a better scientific understanding of the underlying pathophysiology, which may ideally lead to the identification of novel therapeutic prospects.

Seven years ago, an international consensus statement endorsed the replacement of classic cytogenetic karyotype analysis by chromosomal microarrays as a first-tier diagnostic test in individuals with developmental disabilities or congenital anomalies. These studies add to the growing evidence that this consensus may already be outdated, as high-throughput sequencing techniques may achieve even higher diagnostic yields and, thus, are capable of becoming the new first-tier diagnostic test in congenital and early-onset disorders.
 

Johannes R. Lemke, MD, is with the Institute of Human Genetics at the University of Leipzig (Germany) Hospitals and Clinics. He reports no conflicts of interest associated with his editorial, which accompanied the JAMA Pediatrics reports (JAMA Pediatr. 2017 July 31. doi: 10.1001/jamapediatrics.2017.1743).

Publications
Topics
Sections
Body

 

The studies by Tan et al. and Berg et al. demonstrate the dramatic effect of the diagnostic yield of different genetic testing approaches on cost-effectiveness and the potential design of testing strategies in children with suspected monogenic conditions. Both studies emphasize the effect of the results of genetic testing. Whereas Tan et al. showed that, in 26% of cases, the result enabled a specific modification of patient care, Berg et al. also demonstrated that there is no basis for identifying optimal, targeted treatments, when testing is not performed and genetic diagnoses are not made.

However, in the absence of targeted treatments, a genetic diagnosis is of high value for the patients, their families, and treating physicians. A clear diagnosis may not only be of prognostic value but also put an end to a possibly stressful and demanding diagnostic odyssey. It may enable patient care that is explicitly focused on the individual needs of the patient. A clear diagnosis usually also allows a better assessment of the risks of recurrence in the family and possibly enables prenatal testing in relatives. Finally, it enables research and a better scientific understanding of the underlying pathophysiology, which may ideally lead to the identification of novel therapeutic prospects. Seven years ago, an international consensus statement endorsed the replacement of classic cytogenetic karyotype analysis by chromosomal microarrays as a first-tier diagnostic test in individuals with developmental disabilities or congenital anomalies. The studies add to the growing evidence that this consensus may already be outdated, as high-throughput sequencing techniques may achieve even higher diagnostic yields and, thus, are capable to become the new first-tier diagnostic test in congenital and early-onset disorders.
 

Johannes R. Lemke, MD, is with the Institute of Human Genetics at the University of Leipzig (Germany) Hospitals and Clinics. He reports no conflicts of interest associated with his editorial, which accompanied the JAMA Pediatrics reports (JAMA Pediatr. 2017 July 31. doi: 10.1001/jamapediatrics.2017.1743).

Body

 

The studies by Tan et al. and Berg et al. demonstrate the dramatic effect of the diagnostic yield of different genetic testing approaches on cost-effectiveness and the potential design of testing strategies in children with suspected monogenic conditions. Both studies emphasize the effect of the results of genetic testing. Whereas Tan et al. showed that, in 26% of cases, the result enabled a specific modification of patient care, Berg et al. also demonstrated that there is no basis for identifying optimal, targeted treatments, when testing is not performed and genetic diagnoses are not made.

However, in the absence of targeted treatments, a genetic diagnosis is of high value for the patients, their families, and treating physicians. A clear diagnosis may not only be of prognostic value but also put an end to a possibly stressful and demanding diagnostic odyssey. It may enable patient care that is explicitly focused on the individual needs of the patient. A clear diagnosis usually also allows a better assessment of the risks of recurrence in the family and possibly enables prenatal testing in relatives. Finally, it enables research and a better scientific understanding of the underlying pathophysiology, which may ideally lead to the identification of novel therapeutic prospects. Seven years ago, an international consensus statement endorsed the replacement of classic cytogenetic karyotype analysis by chromosomal microarrays as a first-tier diagnostic test in individuals with developmental disabilities or congenital anomalies. The studies add to the growing evidence that this consensus may already be outdated, as high-throughput sequencing techniques may achieve even higher diagnostic yields and, thus, are capable to become the new first-tier diagnostic test in congenital and early-onset disorders.
 

Johannes R. Lemke, MD, is with the Institute of Human Genetics at the University of Leipzig (Germany) Hospitals and Clinics. He reports no conflicts of interest associated with his editorial, which accompanied the JAMA Pediatrics reports (JAMA Pediatr. 2017 July 31. doi: 10.1001/jamapediatrics.2017.1743).

Title
Evidence for a new first-tier diagnostic approach
Evidence for a new first-tier diagnostic approach

 

Results from two new studies suggest that genetic testing early in the diagnostic pathway may allow for earlier and more precise diagnoses in early-life epilepsies and a range of other childhood-onset disorders, and potentially limit costs associated with a long diagnostic course.

Both papers, published online July 31 in JAMA Pediatrics, showed the diagnostic yield of genetic testing approaches, including whole-exome sequencing (WES), to be high.

The results also argue for the incorporation of genetic testing into the first diagnostic assessments; not limiting it to severe presentations only; and for broad testing methods to be employed in lieu of narrower ones.

Dr. Anne T. Berg
In a prospective cohort study led by Anne T. Berg, PhD, of Ann & Robert H. Lurie Children’s Hospital in Chicago, 680 children with newly diagnosed early-life epilepsy (onset at less than 3 years of age) and without acquired brain injury were recruited from 17 hospitals in the United States.

Of these patients, just under half (n = 327) underwent various forms of genetic testing at the discretion of the treating physician, including karyotyping, microarrays, epilepsy gene panels, WES, mitochondrial panels, and other tests. Pathogenic variants were discovered in 132 children, or 40% of those receiving genetic testing (JAMA Pediatr. 2017 July 31. doi: 10.1001/jamapediatrics.2017.1743).

Of all the genetic testing methods employed in the study, diagnostic yields were significantly greater for epilepsy gene panels (29.2%) and WES (27.8%), compared with chromosome microarray (7.9%).

The results, the investigators said, provide “added impetus to move the diagnosis of the specific cause to the point of initial presentation ... it is time to provide greater emphasis on and support for thorough genetic evaluations, particularly sequencing-based evaluations, for children with newly presenting epilepsies in the first few years of life.”

In addition to aiding management decisions, early genetic testing “ends the diagnostic odyssey during which parents and physicians spend untold amounts of time searching for an explanation for a child’s epilepsy and reduces associated costs,” Dr. Berg and her colleagues concluded.

In a separate study led by Tiong Yang Tan, MBBS, PhD, of Victorian Clinical Genetics Services in Melbourne, Australia, and his colleagues, singleton WES was used in 44 children recruited at outpatient clinics of a Melbourne hospital system (JAMA Pediatr. 2017 July 31. doi: 10.1001/jamapediatrics.2017.1755).

Children in the study were aged 2-18 years (with mean age at presentation 28 months) and had a wide variety of suspected genetic disorders, including skeletal, skin, neurometabolic, and intellectual disorders. Some of these had features overlapping several conditions. The children in the cohort had not received prior genetic testing before undergoing WES.

The molecular test resulted in a diagnosis in 52% (n = 23) of the children, including unexpected diagnoses in eight of these. Clinical management was altered as result of sequencing findings in six children.

“Although phenotyping is critical, 35% of children had a diagnosis caused by a gene outside the initially prioritized gene list. This finding not only possibly reflects lack of clinical recognition but also underscores the utility of WES in achieving a diagnosis even when the a priori hypothesis is imprecise,” Dr. Tan and his associates wrote in their analysis.

Dr. Tan and his colleagues conducted a cost analysis that found WES performed at initial tertiary presentation resulted in a cost savings of U.S. $6,838 per additional diagnosis (95% confidence interval, U.S. $3,263-$11,678), compared with the standard diagnostic pathway. The figures reflect costs in an Australian care setting.

The children in the study had a mean diagnostic odyssey of 6 years, including a mean of 19 tests and four clinical genetics and four non–genetics specialist consultations. A quarter of them had undergone at least one diagnostic procedure under general anesthesia.

“The diagnostic odyssey of children suspected of having monogenic disorders is protracted and painful and may not provide a precise diagnosis,” Dr. Tan and his colleagues wrote in their analysis. “This paradigm has markedly shifted with the advent of WES.”

WES is best targeted to children “with genetically heterogeneous disorders or features overlapping several conditions,” the investigators concluded. “Our findings suggest that these children are best served by early recognition by their pediatrician and expedited referral to clinical genetics with WES applied after chromosomal microarray but before an extensive diagnostic process.”

Dr. Tan and his colleagues’ study was funded by the Melbourne Genomics Health Alliance and state and national governments in Australia. None of the authors declared conflicts of interest. Dr. Berg and her colleagues’ study was funded by the Pediatric Epilepsy Research Foundation, and none of its authors disclosed commercial conflicts of interest.

 

 

 

FROM JAMA PEDIATRICS

Short, simple antibiotic courses effective in latent TB

Article Type
Changed
Fri, 01/18/2019 - 16:56

 

Latent tuberculosis infection can be safely and effectively treated with 3- and 4-month medication regimens, including those using once-weekly dosing, according to results from a new meta-analysis.

The findings, published online July 31 in Annals of Internal Medicine, bolster evidence that shorter antibiotic regimens using rifamycins alone or in combination with other drugs are a viable alternative to the longer courses (Ann Intern Med. 2017;167:248-55).

Zerbor/Thinkstock
The new study examined only efficacy and toxicity across treatment strategies, finding no significant differences between shorter rifamycin-based regimens and isoniazid-based regimens lasting 6 months or longer; previous research in latent TB has indicated, however, that shorter courses are likely to achieve better patient adherence (BMC Infect Dis. 2016;16:257).

For their research, Dominik Zenner, MD, an epidemiologist with Public Health England in London, and his colleagues updated a meta-analysis they published in 2014. The team added 8 new randomized studies to the 53 that had been included in the earlier paper (Ann Intern Med. 2014 Sep;161:419-28).

Using pairwise comparisons and a Bayesian network analysis, Dr. Zenner and his colleagues found that isoniazid regimens of 6 months or more, rifampicin-isoniazid regimens of 3 or 4 months, rifampicin-only regimens, and rifampicin-pyrazinamide regimens all had comparable efficacy, and each was effective compared with placebo (P less than .05 for all).
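
A full Bayesian network meta-analysis combines direct and indirect evidence across every regimen in the network, but its pairwise building block is conventional inverse-variance pooling. The Python sketch below shows that building block on invented trial counts; it is a generic fixed-effect illustration, not the authors’ model.

    import math

    # Fixed-effect inverse-variance pooling of log odds ratios: the
    # pairwise building block of a network meta-analysis. Trial counts
    # below are invented for illustration; real analyses also handle
    # zero cells (continuity corrections) and heterogeneity.

    def log_or_and_var(events_t, n_t, events_c, n_c):
        """Log odds ratio and its variance from one trial's 2x2 table."""
        a, b = events_t, n_t - events_t
        c, d = events_c, n_c - events_c
        return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

    def pool(trials):
        """Pooled log OR and 95% CI, weighting each trial by 1/variance."""
        w = [(lo / v, 1 / v) for lo, v in (log_or_and_var(*t) for t in trials)]
        pooled = sum(x for x, _ in w) / sum(y for _, y in w)
        se = math.sqrt(1 / sum(y for _, y in w))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

    # (TB cases, n) on treatment vs. (TB cases, n) on placebo, per trial.
    trials = [(5, 300, 14, 290), (3, 150, 9, 160)]
    log_or, (lo, hi) = pool(trials)
    print(f"pooled OR {math.exp(log_or):.2f} "
          f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")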

Importantly, a rifapentine-based regimen in which patients took a weekly dose for 12 weeks was as effective as the others.

“We think that you can get away with shorter regimens,” Dr. Zenner said in an interview. Although 3- to 4-month courses are already recommended in some countries, including the United Kingdom, for most patients with latent TB, “clinicians in some settings have been quite slow to adopt them,” he said.

The U.S. Centers for Disease Control and Prevention currently recommend multiple treatment strategies for latent TB, depending on patient characteristics. These include 6 or 9 months of isoniazid; 3 months of once-weekly isoniazid and rifapentine; or 4 months of daily rifampin.

In the meta-analysis, rifamycin-only regimens performed as well as regimens that also included isoniazid, suggesting that, for most patients who can safely be treated with rifamycins, “there is no added gain of using isoniazid,” Dr. Zenner said.

He noted that the longer isoniazid-alone regimens are nonetheless effective and appropriate for some, including people who might have potential drug interactions, such as HIV patients taking antiretroviral medications.

About 2 billion people worldwide are estimated to have latent TB, and most will not go on to develop active TB. However, because latent TB acts as the reservoir for active TB, screening high-risk groups and close contacts of TB patients – and treating latent infections – is a public health priority.

But many of these asymptomatic patients will get lost between a positive screen result and successful treatment completion, Dr. Zenner said.

“We have huge drop-offs in the cascade of treatment, and treatment completion is one of the worries,” he said. “Whether it makes a huge difference in compliance to take only 12 doses is not sufficiently studied, but it does make a lot of sense. By reducing the pill burden, as we call it, we think that we will see quite good adherence rates – but that’s a subject of further detailed study.”
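
The pill-burden arithmetic behind that remark is easy to make concrete. Assuming roughly 30 doses per month of daily therapy, the snippet below tallies total doses for the regimen lengths named in this article; the once-weekly course needs 12 doses against well over 100 for the daily alternatives. The counts are back-of-envelope figures, not prescribing guidance.

    # Approximate total doses per regimen, assuming ~30 daily doses/month.
    # Regimens are those named in the article; counts are rough estimates.
    regimens = {
        "isoniazid daily, 6 months": 6 * 30,
        "isoniazid daily, 9 months": 9 * 30,
        "rifampin daily, 4 months": 4 * 30,
        "isoniazid + rifapentine once weekly, 3 months": 12,
    }
    for name, doses in sorted(regimens.items(), key=lambda kv: kv[1]):
        print(f"{doses:>3} doses  {name}")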

The investigators noted as a limitation of their study that hepatotoxicity outcomes were not available for all studies and that some of the included trials had a potential for bias. They did not see statistically significant differences in treatment efficacy between regimens in HIV-positive and HIV-negative patients, but noted in their analysis that “efficacy may have been weaker in HIV-positive populations.”

The U.K. National Institute for Health Research provided some funding for Dr. Zenner and his colleagues’ study. One coauthor, Helen Stagg, PhD, reported nonfinancial support from Sanofi during the study, and financial support from Otsuka for unrelated work.


 

FROM ANNALS OF INTERNAL MEDICINE

Vitals

 

Key clinical point: Rifamycin-only treatment of latent TB works as well as combination regimens, and shorter dosing schedules show no loss in efficacy vs. longer ones.

Major finding: Rifamycin-only regimens, rifampicin-isoniazid regimens of 3 or 4 months, and rifampicin-pyrazinamide regimens were all effective, compared with placebo and with isoniazid regimens of 6, 12, and 72 months.

Data source: A network meta-analysis of 61 randomized trials, 8 of them published in the last 3 years.

Disclosures: The U.K. National Institute for Health Research funded some coauthors; one coauthor disclosed financial relationships with pharmaceutical firms.


Free water in brain marks Parkinson’s progression

Article Type
Changed
Mon, 01/07/2019 - 12:58

 

Free water in the posterior substantia nigra brain region increased as clinical Parkinson’s disease progressed in a 4-year longitudinal study of participants in the Parkinson’s Progression Markers Initiative.

Image courtesy of David Vaillancourt, PhD, University of Florida
NIH-funded scientists have discovered that Parkinson's disease increases the amount of "free" water in the posterior substantia nigra (in squares).
The research, published online July 28 in Brain, was led by Roxana G. Burciu, PhD, of the University of Florida, Gainesville (Brain. 2017;140:2183-92). Dr. Burciu and her colleagues used imaging data from the Parkinson’s Progression Markers Initiative (PPMI), a longitudinal, multisite, international observational study, to measure free water in 103 newly diagnosed, early-stage Parkinson’s patients imaged at baseline and 1 year (69 of them male) and in 49 healthy controls (29 male). Patients and controls were matched for age and sex, and the mean age in the cohort was 60 years.
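
The article does not detail how “free water” is computed, but in diffusion MRI the term conventionally denotes the isotropic, CSF-like compartment of a two-compartment (“bi-tensor”) signal model, with the free-water fraction estimated by fitting that model to measurements at several b-values. The Python sketch below fits the simplified, single-direction form of that model to synthetic data; it is an illustration under those assumptions, not the study’s actual processing pipeline.

    import numpy as np
    from scipy.optimize import curve_fit

    # Two-compartment ("bi-tensor") signal model conventionally used to
    # estimate the free-water fraction f from diffusion MRI. Simplified,
    # single-direction sketch on synthetic data for illustration only.
    D_FREE = 3.0e-3  # mm^2/s, diffusivity of free water at body temperature

    def signal(b, f, d_tissue):
        """Normalized signal: tissue compartment plus free-water compartment."""
        return (1.0 - f) * np.exp(-b * d_tissue) + f * np.exp(-b * D_FREE)

    # Synthetic measurements at several b-values (s/mm^2), with noise.
    b_vals = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])
    rng = np.random.default_rng(0)
    observed = (signal(b_vals, f=0.25, d_tissue=0.7e-3)
                + rng.normal(0.0, 0.005, b_vals.size))

    (f_hat, d_hat), _ = curve_fit(
        signal, b_vals, observed,
        p0=(0.1, 1.0e-3), bounds=([0.0, 1e-4], [1.0, 3.0e-3]),
    )
    print(f"estimated free-water fraction: {f_hat:.2f}")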

Dr. Burciu and her colleagues found that free water increased over the first year post diagnosis in Parkinson’s patients but not in controls (P = .043), confirming similar results from earlier studies (Neurobiol Aging. 2015;36:1097-104; Brain. 2015;138:2322-31).

The researchers also looked at data from 46 Parkinson’s patients in the cohort who underwent imaging at 2 and 4 years to learn whether the observed increases in free water corresponded to progression measured on the Hoehn and Yahr scale, a widely used measure of Parkinson’s symptom severity.

Free water continued to increase in the Parkinson’s patients through 4 years, and increases in the first and second years after diagnosis were significantly associated with worsening of symptoms through 4 years (P less than .05 for both). Moreover, the investigators noted, men saw greater 4-year increases in free water levels, compared with women.

“The short-term increase in free water is related to the long-term progression of motor symptoms. Moreover, sex and baseline free water levels significantly predicted the rate of change in free water in [the posterior substantia nigra] over 4 years,” the investigators wrote.

The results were consistent across study sites, they found.

Dr. Burciu and her colleagues disclosed funding from the PPMI, which is supported by the Michael J. Fox Foundation and a consortium of pharmaceutical, biotech, and financial firms. The researchers also received funding from the National Institutes of Health. None disclosed financial conflicts of interest.

FROM BRAIN

Vitals

 

Key clinical point: Free water in the posterior substantia nigra increases alongside Parkinson’s symptoms over a 4-year period, suggesting it could serve as a biomarker of disease progression.

Major finding: Free water increases measured in years 1 and 2 after diagnosis were associated with worsening of symptoms through year 4 (P less than .05 for both).

Data source: Analysis of 103 patients and 49 controls from a large, multisite, international, observational, longitudinal study seeking Parkinson’s biomarkers.

Disclosures: The National Institutes of Health and the Parkinson’s Progression Markers Initiative (PPMI) funded this analysis. The PPMI receives broad funding from industry and foundations. None of the researchers disclosed financial conflicts of interest.


CDC refocuses Zika testing recommendations in pregnancy

Article Type
Changed
Fri, 01/18/2019 - 16:56

 

Federal health officials are no longer recommending routine Zika virus testing for pregnant women who are asymptomatic, including those who may have been exposed before pregnancy through travel or sexual contact.

In updated guidance released July 24, the Centers for Disease Control and Prevention cited a combination of factors behind the change in recommendations, including the declining prevalence of Zika virus across the Americas and a high likelihood of false positives associated with the use of a common serologic assay (MMWR Morb Mortal Wkly Rep. ePub 2017 Jul 24. doi: 10.15585/mmwr.mm6629e1).

copyright Felipe Caparrós Cruz/Thinkstock
While Zika virus immunoglobulin M (IgM) antibody tests have been widely used to detect Zika in pregnant women, positive results can persist for months after active infection, according to the CDC. That inability to determine whether an infection occurred before or during a pregnancy is a “major challenge for pregnant women and their health care providers, making it difficult … to counsel pregnant women about the risk for congenital Zika virus infection,” wrote Titilope Oduyebo, MD, of the CDC’s Zika virus team in Atlanta and her colleagues.

Positive IgM results can also occur after previous exposure to other flaviviruses besides Zika, Dr. Oduyebo and her colleagues noted.

The CDC now recommends that pregnant women with likely continuing – not previous – exposure to the Zika virus and those with symptoms suggestive of Zika virus disease be tested. Those higher-risk groups should receive nucleic acid testing (NAT).

The new guidance presents two updated testing algorithms, one for each group.

Any pregnant woman with symptoms suggestive of Zika should be tested “as soon as possible through 12 weeks after symptom onset,” the CDC said, with both NAT (serum and urine) and IgM serology testing.

Women with likely ongoing exposure to Zika – such as those living in or traveling to an area of mosquito-borne Zika transmission or those whose partners are living in or traveling to such an area – should be tested up to three times during the pregnancy using NAT serum and urine tests. IgM testing is not recommended for that group.
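
Reduced to decision logic, the two algorithms look like the Python sketch below. It encodes only the testing recommendations as summarized in this article; the full MMWR guidance adds timing, repeat-testing, and result-interpretation steps.

    # CDC Zika testing recommendations for pregnant women, as summarized
    # in this article. Illustrative only; see the MMWR guidance for the
    # complete algorithms.

    def recommended_testing(symptomatic: bool, ongoing_exposure: bool) -> str:
        if symptomatic:
            # Test as soon as possible, through 12 weeks after symptom onset.
            return "NAT (serum and urine) plus IgM serology"
        if ongoing_exposure:
            # Up to three NATs during the pregnancy; IgM not recommended.
            return "NAT (serum and urine), up to three times during pregnancy"
        # Asymptomatic without ongoing exposure: no routine testing;
        # patient preference and clinical judgment guide case-by-case decisions.
        return "no routine testing; shared decision making"

    print(recommended_testing(symptomatic=False, ongoing_exposure=True))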

All pregnant women should be asked about their potential Zika exposures before and during the current pregnancy, the CDC said. That discussion, which covers potential travel and partner exposures along with questions about symptoms, should be repeated at every prenatal visit.

While routine testing of asymptomatic women without ongoing exposure is not recommended, patient preferences, clinical judgment, and a “balanced assessment of risks and expected outcomes” should guide decisions about testing, according to the CDC.

FROM MMWR

Acute liver failure in the ED

Article Type
Changed
Wed, 12/12/2018 - 20:58

 

Acute liver failure (ALF) is a life-threatening deterioration of liver function in people without preexisting cirrhosis. It can be caused by acetaminophen toxicity, pregnancy, ischemia, hepatitis A infection, and Wilson disease, among other causes.

In emergency medicine, ALF can pose serious dilemmas. While transplantation has drastically improved survival rates in recent decades, it is not always required, and no firm criteria for transplantation exist.

But delays in the decision to go ahead with a liver transplant can lead to death.

A new literature review aims to distill the decision-making process for emergency medicine practitioners. Because so many factors are involved, knowing which candidates will benefit from transplantation, and when to perform it, “is crucial in improving the likelihood of survival,” its authors say.

In a paper published online in May in The American Journal of Emergency Medicine (2017 May. doi: 10.1016/j.ajem.2017.05.028), Hamid Shokoohi, MD, and his colleagues at George Washington University Medical Center in Washington say that establishing the cause of acute liver failure is essential to making treatment decisions, as some causes are associated with a poorer prognosis without transplantation.

“We wanted to improve awareness among emergency medicine physicians, who are the first in the chain of command for transferring patients to a transplant site,” said Ali Pourmand, MD, of George Washington University, Washington, and the corresponding author of the study. “The high risk of early death among these cases makes it necessary for emergency physicians to consider coexisting etiology, be aware of indications and criteria available to determine the need for emergent transplantation, and be able to expedite patient transfer to a transplant center, when indicated.”

As patients presenting with ALF are likely too impaired to provide a history, and physical exam findings may be nonspecific, laboratory findings are key to establishing both severity and likely cause. ALF patients in general will have a prolonged prothrombin time, markedly elevated aminotransferase levels, elevated bilirubin, and a low platelet count.

Patients with ALF caused by acetaminophen toxicity (the most common cause of ALF in the United States) are likely to present with very high aminotransferase levels, low bilirubin, and high international normalized ratio (INR). Those with viral causes of ALF, meanwhile, tend to have aminotransferase levels of 1,000-2,000 IU/L, and alanine transaminase higher than aspartate transaminase.
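
As a purely didactic restatement of those laboratory signatures, the sketch below encodes them as a rough triage hint in Python. The numeric thresholds are hypothetical stand-ins for “very high” and “low” – the review itself gives ranges only for the viral pattern – and the function is an illustration, not a diagnostic tool.

    # Didactic restatement of the lab patterns described above.
    # Numeric cutoffs are hypothetical; only the viral-range figures
    # (aminotransferases of 1,000-2,000 IU/L, ALT > AST) come from the text.

    def suggested_etiology(ast, alt, bilirubin, inr):
        """Rough etiologic hint from an ALF laboratory panel."""
        peak_at = max(ast, alt)
        if peak_at > 3000 and bilirubin < 5 and inr > 2:  # hypothetical cutoffs
            return "pattern consistent with acetaminophen toxicity"
        if 1000 <= peak_at <= 2000 and alt > ast:
            return "pattern consistent with a viral cause"
        return "indeterminate: full workup needed"

    print(suggested_etiology(ast=5200, alt=6100, bilirubin=2.4, inr=3.1))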

Prognosis without transplantation is considerably poorer in patients with severe ALF caused by Wilson disease, Budd-Chiari syndrome, or idiosyncratic drug reactions, compared with those who experience viral hepatitis or acetaminophen toxicity.

Dr. Shokoohi and his colleagues noted that two validated scoring systems can be used to assess prognosis in severe ALF. The King’s College Criteria can establish prognosis both for ALF caused by acetaminophen and for ALF from other causes, while the MELD score, recommended by the American Association for the Study of Liver Diseases, incorporates bilirubin, INR, sodium, and creatinine levels to predict prognosis. Both scoring systems can inform decisions about transplantation.
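
The MELD calculation itself is short. The sketch below implements the commonly published UNOS/OPTN form of the score, including the sodium adjustment the review alludes to; clamping rules and adjustments vary somewhat between implementations, so this is illustrative rather than authoritative.

    import math

    # MELD and MELD-Na as commonly published (UNOS/OPTN form).
    # Clamping conventions vary between implementations; illustrative only.

    def meld(bilirubin, inr, creatinine):
        """MELD from bilirubin (mg/dL), INR, and creatinine (mg/dL)."""
        # Lab values below 1.0 are raised to 1.0; creatinine is capped at 4.0.
        bili = max(bilirubin, 1.0)
        inr = max(inr, 1.0)
        crea = min(max(creatinine, 1.0), 4.0)
        score = (3.78 * math.log(bili)
                 + 11.2 * math.log(inr)
                 + 9.57 * math.log(crea)
                 + 6.43)
        return round(score)

    def meld_na(bilirubin, inr, creatinine, sodium):
        """Sodium-adjusted MELD; sodium clamped to 125-137 mmol/L."""
        base = meld(bilirubin, inr, creatinine)
        na = min(max(sodium, 125.0), 137.0)
        if base > 11:
            base = base + 1.32 * (137 - na) - 0.033 * base * (137 - na)
        return round(base)

    print(meld_na(bilirubin=4.2, inr=2.1, creatinine=1.6, sodium=131))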

Finally, the authors advised that patients with alcoholic liver disease be considered under the same criteria for transplantation as those with other causes of ALF. “Recent research has shown that only a minority of patients ... will have poor follow-up and noncompliance to therapy and/or will revert to heavy alcohol use or abuse after transplant,” they wrote in their analysis. The researchers disclosed no outside funding or conflicts of interest related to their article.

FROM THE AMERICAN JOURNAL OF EMERGENCY MEDICINE

Cancer immunotherapy seen repigmenting gray hair

Article Type
Changed
Mon, 01/14/2019 - 10:05

 

Patients on immunotherapy treatments for lung cancer have experienced repigmentation of their formerly gray hair, according to a new report. Moreover, researchers say, all but one of the patients experiencing this effect also responded well to the therapy, suggesting that hair repigmentation could potentially serve as a marker of treatment response.

A woman with gray hair in a bob hairstyle.
XiXinXing/Thinkstock
Anti–PD-1 and anti–PD-L1 therapies work by preventing tumors from escaping the immune response, and other studies have associated them with skin events including cutaneous eruption, vitiligo, and pruritus. Patients receiving anti–PD-1 therapies for melanoma have been reported to develop vitiligo involving their hair. Hair repigmentation has previously been documented with a handful of other drugs used in cancer or rheumatology – including thalidomide, lenalidomide, erlotinib, adalimumab, and etretinate – but the mechanisms by which any of these agents affect hair or skin are poorly understood.

The new report, published in JAMA Dermatology, describes a single-center cohort of 52 lung cancer patients who were monitored for cutaneous effects of anti–PD-1 or anti–PD-L1 treatment; 14 of them saw a diffuse restoration of their original hair color during therapy. Dr. Rivera and her colleagues wrote in their analysis that gray hair follicles “still preserve a reduced number of differentiated and functioning melanocytes located in the hair bulb. This reduced number of melanocytes may explain the possibility of [repigmentation] under appropriate conditions.” But there are competing theories as to why this should occur with cancer immunotherapy, they noted. One is that the drugs’ inhibition of proinflammatory cytokines removes negative regulators of melanogenesis. Another is that melanocytes in hair follicles are activated through inflammatory mediators.

Of the patients with hair repigmentation in the study, only one, who was being treated with nivolumab for lung squamous cell carcinoma, had disease progression. Treatment was discontinued after four sessions, and the patient died. The other 13 patients had either stable disease or a partial response.

The study received no outside funding, but two investigators disclosed financial relationships with pharmaceutical manufacturers.

FROM JAMA DERMATOLOGY

Vitals

 

Key clinical point: Patients treated with anti–PD-1 and anti–PD-L1 immunotherapies for lung cancer experienced repigmentation of gray hair during treatment.

Major finding: Of 52 patients, 14 saw a diffuse restoration of their original hair color during the course of treatment; all but 1 also had a robust treatment response.

Data source: A case series drawn from a single-center cohort of 52 lung cancer patients treated with anti–PD-1 and anti–PD-L1 and monitored for cutaneous effects.

Disclosures: Two coauthors disclosed financial relationships with several drug manufacturers.
