Many patients with high cholesterol forgo medication and lifestyle changes
At a time when more than a third of the U.S. adult population was taking or eligible to take cholesterol-lowering medication, many adults in that category made no attempt to improve their health, according to the Centers for Disease Control and Prevention’s analysis of data from the 2005-2012 National Health and Nutrition Examination Surveys.
The researchers assessed as a group adults who met criteria in the 2013 American College of Cardiology/American Heart Association cholesterol management guidelines, as well as those currently taking cholesterol-lowering medication. At the time of the survey, 78.1 million (36.7%) of the U.S. population aged 21 and over had taken or were eligible for cholesterol-lowering treatments, the investigators noted.
Among those who were taking, or were eligible to take, cholesterol-lowering medication, 55.5% said they were taking their drugs, 46.6% reported having made lifestyle modifications such as exercising and adopting a heart-healthy diet, and 37.1% reported both making lifestyle modifications and taking cholesterol-lowering medications. Although many Americans with high cholesterol made efforts to decrease their cholesterol levels, the researchers noted that 35.5% of the adults who needed or used cholesterol-lowering drugs neither took the medicine nor implemented appropriate lifestyle changes.
There were significant differences in the proportion of men (52.9%) and women (58.6%) taking cholesterol drugs (P = .010), as well as among racial/ethnic groups (whites, 58.0%; Mexican-Americans, 47.1%; and blacks, 46.0%; P less than .001).
“This report is one of the first to examine sex and racial/ethnic differences in medication use in a nationally representative sample of adults who are eligible for treatment,” wrote Carla Mercado, Ph.D., of the CDC and her colleagues.
The researchers recommended that stakeholders “implement evidence-based interventions from the Guide to Community Preventive Services to improve screening and management of cholesterol.”
Read the article in MMWR (2015 Dec 4;64[47]:1305-11. doi: 10.15585/mmwr.mm6447a1).
FROM MMWR
Study: Children with pet dogs less likely to have anxiety
In a cross-sectional study, a higher percentage of children without pet dogs (21%) than children with pet dogs (12%) screened positive for anxiety.
Researchers conducted the study at a general pediatric clinic in an academic medical center in Upstate New York. All parents of children enrolled in the study completed SCARED-5, a 5-item scale adapted from the Screen for Child Anxiety and Related Disorders, a tool validated in both psychiatric and primary care settings.
Dr. Anne M. Gadomski, attending pediatrician and research scientist at Bassett Medical Center in Cooperstown, N.Y., and her colleagues analyzed the mean SCARED-5 score and the proportion of children meeting the SCARED-5 clinical score threshold of 3 or more, a point at which further assessment is indicated to diagnose anxiety. Their final analysis involved 370 children with a pet dog and 273 children with no pet dog. The children were aged 4-10 years. Ill or developmentally disabled children were excluded from the study.
In a univariate analysis, the mean SCARED-5 score was significantly lower among children with a pet dog than among children without one: 1.13 vs. 1.40 (P = .01). The predicted probability of a SCARED-5 score of 3 or higher was 0.20 for children without pet dogs, compared with 0.11 for children with pet dogs. Having a pet dog thus was associated with a 9% lower probability of a child scoring 3 or higher on the SCARED-5, further demonstrating the link between pet dogs and a decreased likelihood of childhood anxiety.
“Our study results suggest that children who have a pet dog in the home have a lower anxiety screening score than children who do not,” wrote Dr. Gadomski and her colleagues.
Further research on the anxiety levels of children with pet dogs should determine whether having a pet dog prevents a child from being anxious, and if so, how pets contribute to this absence of anxiety in children, they noted.
Read the full study in Preventing Chronic Disease (doi: 10.5888/pcd12.150204).
FROM PREVENTING CHRONIC DISEASE
Study: Exposure history critical to design of universal flu vaccine
In a study with implications for the development of new influenza vaccine strategies, researchers discovered that – among patients who received the 2009 H1N1 influenza vaccine – individuals with low levels of H1N1-specific antibodies prior to vaccination produced a more broadly protective immune response against the influenza virus than patients with high levels of H1N1-specific antibodies prior to vaccination.
A research team led by Patrick C. Wilson, Ph.D., of the Knapp Center for Lupus and Immunology Research at the University of Chicago, studied the B cell response in patients who received the pandemic 2009 H1N1 vaccine 2 years in a row and had varied histories of influenza exposure. All patients were 18 years or older, healthy, and had not received the yearly influenza vaccine prior to participating in the study. The researchers compared the patients’ “vaccine-induced plasmablast response upon first vaccination with the pandemic H1N1 strain in 2009-2010” with the patients’ plasmablast response upon revaccination with this same strain in 2010-2011 or 2011-2012. Each of the 21 study participants provided the researchers with at least four H1N1-specific plasmablasts.
The researchers “analyzed the immunoglobulin regions, strain specificity, and functional properties of the antibodies produced by this plasmablast population at the single-cell level across multiple years,” which allowed them to directly evaluate the effect of immune memory on the specificity of the current response to the virus.
Among the study’s findings was that “only individuals with low preexisting serological levels of pandemic H1N1 specific antibodies generated a broadly neutralizing plasmablast response directed toward the [hemagglutinin] stalk,” which is part of the hemagglutinin protein located on the surface of the influenza virus.
“[W]e demonstrate that the immune subdominance of the [hemagglutinin] stalk is a function of both the poor accessibility to the broadly protective epitopes and the inherent polyreactivity of the antibodies that can bind. We conclude that immunological memory profoundly shapes the viral epitopes targeted upon exposure with divergent influenza strains and determines the likelihood of generating a broadly protective response,” said Dr. Wilson and his coauthors. The authors reported no conflicts of interest.
Read the full study in Science Translational Medicine (doi: 10.1126/scitranslmed.aad0522).
FROM SCIENCE TRANSLATIONAL MEDICINE
FROM SCIENCE TRANSLATIONAL MEDICINE
Dr. Fernando Stein named president-elect of American Academy of Pediatrics
American Academy of Pediatrics members have selected their next president-elect. Dr. Fernando Stein will begin serving as president of the AAP on Jan. 1, 2017.
Dr. Stein, a general pediatrician specializing in critical care, completed his subspecialty training at institutions affiliated with the Baylor College of Medicine, Houston. He is a founding member of the AAP Section on Critical Care, a past member of the Council on Sections Management Committee and the Committee on Membership, a past member of the former Committee of Scientific Meetings, and one of the original members of the Task Force on Minorities. His other work for the AAP has included serving as chair of the Council of Sections, the Section on Critical Care, and the Committee on Membership.
Dr. Stein’s career has focused on caring for children surviving critical illness and those with technological dependency. “His work on advocacy for the integrated management of childhood illness is recognized at a continental level,” notes the AAP’s 2015 conference website. Dr. Stein is a native of Guatemala.
He will succeed Dr. Benard P. Dreyer as president. Dr. Dreyer will assume his elected office on Jan. 1, 2016. He will replace the AAP’s current president, Dr. Sandra G. Hassink.
Cerebellar soft signs similar in schizophrenia, bipolar
Cerebellar soft signs are common symptoms in schizophrenia and bipolar disorder, a study suggests.
“While many authors used [neurological soft signs] scales to measure severity and progression of [schizophrenia] and [bipolar disorder], we propose [cerebellar soft signs] scale as an accurate measure of cerebellar signs, which seems to co-occur in both diseases,” Adrian Andrzej Chrobak and his colleagues wrote.
The study included 30 patients with bipolar disorder, 30 patients with schizophrenia, and 28 individuals who had not been diagnosed with either condition. To participate, patients with schizophrenia or bipolar disorder had to be in symptomatic remission, defined as scoring less than 3 on the Positive and Negative Syndrome Scale, and had to be receiving antipsychotic drugs of the dibenzoxazepine class (clozapine, quetiapine, and olanzapine). Patients treated with lithium, as well as those with a history of alcohol or drug abuse; severe, acute, or chronic neurologic or somatic disease; or severe personality disorders, were excluded from the study.
The researchers used the Neurological Evaluation Scale (NES) and the International Cooperative Ataxia Rating Scale (ICARS) to determine the presence and severity of neurological soft signs and cerebellar soft signs, respectively, in all of the study participants.
The average ICARS scores for the schizophrenia and bipolar disorder groups were significantly higher than the mean ICARS score of the control group. No significant differences were found between the schizophrenia and bipolar disorder groups’ total ICARS and ICARS subscale scores. While the schizophrenia group scored significantly higher than the control group on all ICARS subscales, the bipolar disorder group scored significantly higher than controls only on the posture, gait disturbances, and oculomotor disorders subscales.
The NES scores for the schizophrenia and bipolar disorder groups also were significantly higher than those of the control group. No statistically significant differences were found between the two patient groups’ total NES and NES subscale scores.
“Our results suggest that there is no significant difference in both [neurological soft signs] and [cerebellar soft signs] scores between [bipolar disorder] and [schizophrenia] groups. This stays in tune with the theory of schizophrenia-bipolar disorder boundary and points to [the] cerebellum as a possible target for further research in this field,” according to the researchers.
Read the full study in Progress in Neuro-Psychopharmacology & Biological Psychiatry (doi: 10.1016/j.pnpbp.2015.07.009).
FROM PROGRESS IN NEURO-PSYCHOPHARMACOLOGY & BIOLOGICAL PSYCHIATRY
Schizophrenia patients, decision makers aligned on treatments
Schizophrenia patients and their named alternative decision makers expressed similar views on how to make decisions about the patients’ treatment and participation in research, a pilot study shows.
Twenty individuals with schizophrenia who were living in a community setting participated in a written survey inquiring about their decisions regarding their participation in treatment and research, and their underlying values. Each schizophrenia patient identified one individual as a preferred alternative decision maker, and those individuals participated in a “parallel survey.” The patients’ perceived importance of items pertaining to burden, happiness, and safety when making decisions about treatment and research were among the areas addressed in the surveys’ questions.
“Personal views of ill individuals were compared to predictions made by preferred alternative decision makers regarding ill individuals’ perspectives (“attunement”) for aspects related to treatment and research decisions,” the study says. To assess alignment between the members of each patient-alternative decision-maker pair, the researchers asked each study participant about the closeness of the relationship and frequency of contact between that individual and the other member of his or her pair. Each participant in the study also was asked to rate the importance of several ethical principles.
When making decisions pertaining to treatment, most of the individuals with schizophrenia and their respective alternative decision makers stated that the following issues were important to consider: the burden of treatment placed on the ill individual, the safety of the ill individual, the happiness of the ill individual, the safety of the family, and the happiness of the family. The two categories of study participants also reached attunement on all six aspects of research decision making covered in the surveys. An additional study finding was that both members of patient-alternative decision-maker pairs were aligned in their views of ethically salient aspects of daily life.
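The pairwise comparison behind these attunement findings can be illustrated with a simple percent-agreement calculation. This is a hypothetical sketch only, not the study's actual statistic; the paper's "attunement" analysis may use a different measure, and the item ratings below are invented for illustration:

```python
def percent_agreement(patient_ratings, proxy_ratings):
    """Fraction of items on which a patient and his or her alternative
    decision maker gave the same importance rating (1 = important, 0 = not).
    Hypothetical illustration of pairwise agreement."""
    pairs = list(zip(patient_ratings, proxy_ratings))
    return sum(p == q for p, q in pairs) / len(pairs)

# Hypothetical ratings on five treatment-decision items
# (burden, patient safety, patient happiness, family safety, family happiness)
print(percent_agreement([1, 1, 1, 0, 1], [1, 1, 0, 0, 1]))  # 0.8
```

A high fraction across many patient–proxy pairs would correspond to the kind of alignment the authors describe.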
“The strong overall pattern of attunement and alignment revealed in this study should, we suggest, inspire confidence in the ethical safeguard of alternative decision makers, particularly in very close relationships between ill individuals and their preferred alternative decision maker,” said Dr. Laura Weiss Roberts and Jane Paik Kim, Ph.D., both of the department of psychiatry and behavioral sciences at Stanford (Calif.) University.
The study’s findings “must be considered preliminary, and warrant additional and more systematic inquiry,” according to the researchers.
Read the full study in Journal of Psychiatric Research (doi: 10.1016/j.jpsychires.2015.09.014).
FROM JOURNAL OF PSYCHIATRIC RESEARCH
Synovitis, effusion associated with increased pain sensitivity
Synovitis and effusion were associated with increases in pain sensitivity at the patella and wrist, respectively, in a study of 1,111 patients with or at risk of knee osteoarthritis (OA).
Radiographs and MRIs were taken of the patients’ knees, and the patients’ wrists and patellae were subjected to standardized quantitative sensory testing (QST) measures. The QST measures included temporal summation, which is a measure of central pain amplification based on “an augmented response to repetitive mechanical stimulation,” and pressure pain threshold (PPT), a measure of sensitivity to pain evoked by mechanical stimulation of nociceptors. (Lower PPTs represent a greater degree of sensitization or pain sensitivity.) All tests were conducted at baseline and 2 years later.
Synovitis was associated with a significant decrease in PPT at the patella, while effusion was associated with a decrease in PPT at the wrist. Effusion was additionally associated with risk of incident temporal summation. In contrast to synovitis and effusion, bone marrow lesions were not associated with either temporal summation or decreased pressure pain threshold.
“Our findings support the potential relevance of inflammation in the development and heightening of sensitization in knee osteoarthritis in humans. We found that synovitis was associated with lower PPT and a decrease in PPT at the patella over time, indicating increased pain sensitization or sensitivity. Effusion was associated with development of new temporal summation at the patella, and with a decrease in PPT at the wrist, a site distant to the pathology; both findings suggest the involvement of central sensitization. Thus inflammation appears to influence the development of and perhaps amplification of sensitization,” said Dr. Tuhina Neogi, of the department of medicine at Boston University and her colleagues.
Read the full study in Arthritis & Rheumatology (doi: 10.1002/art.39488).
FROM ARTHRITIS & RHEUMATOLOGY
Overweight, obese patients at greater risk for knee replacement surgery
Both overweight and obese patients with knee osteoarthritis (OA) are more likely to get knee replacement surgery, compared with normal-weight patients with knee OA, results of a population-based cohort study of people in Catalonia, Spain, suggest.
The study included 105,189 patients who had been diagnosed with knee OA between 2006 and 2011. Patients with a history of knee OA or knee replacement in either knee before Jan. 1, 2006, and patients with a history of inflammatory arthritis were not included in the study.
The patients were followed from the date of knee OA diagnosis until the date they underwent elective knee replacement surgery or until Dec. 31, 2011. (The researchers were unable to follow up with all individuals initially enrolled in the study.) The participants were divided into the following categories based on body mass index: normal (BMI less than 25 kg/m2), overweight (BMI 25 to less than 30 kg/m2), obese class I (BMI 30 to less than 35 kg/m2), obese class II (BMI 35 to less than 40 kg/m2), and obese class III (BMI 40 kg/m2 or greater).
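The BMI cutoffs above amount to a simple classification rule. A minimal sketch (function name hypothetical):

```python
def bmi_category(bmi):
    """Map body mass index (kg/m^2) to the study's five BMI categories."""
    if bmi < 25:
        return "normal"
    elif bmi < 30:
        return "overweight"
    elif bmi < 35:
        return "obese class I"
    elif bmi < 40:
        return "obese class II"
    return "obese class III"

print(bmi_category(24.9))  # normal
print(bmi_category(37.2))  # obese class II
```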
The risk of knee replacement increased with BMI. The incidence rate of surgery was 1.35 per 100 person-years among normal-weight patients, compared with 3.49 per 100 person-years among patients in obese class III. Compared with normal-weight study participants, adjusted hazard ratios for knee replacement surgery were 1.41 for overweight patients, 1.97 for obese class I, 2.39 for obese class II, and 2.67 for obese class III.
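As a rough check on the magnitude of the reported effect, the crude rate ratio implied by the two incidence rates can be computed directly; it lands close to the adjusted hazard ratio of 2.67. The closeness is illustrative only, since the hazard ratio was adjusted for covariates:

```python
# Incidence rates of knee replacement, per 100 person-years (from the study)
rate_normal = 1.35     # normal-weight patients
rate_obese_iii = 3.49  # obese class III patients

crude_rate_ratio = rate_obese_iii / rate_normal
print(round(crude_rate_ratio, 2))  # 2.59, vs. an adjusted hazard ratio of 2.67
```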
An additional finding was a significant interaction between BMI and age on the risk of knee replacement (P less than .001), with a higher relative hazard associated with obesity among patients younger than 68 years.
“This research demonstrates that overweight and obesity are strong independent predictors of the clinical progression of knee OA, from disease onset/diagnosis to joint failure and subsequent [knee replacement]. Overweight subjects are at over 40% increased risk of surgery, and those who are obese have a more than doubled risk when compared to subjects with normal weight,” said Kristen M. Leyland, D.Phil., and her colleagues.
Read the full study in Arthritis & Rheumatology (doi: 10.1002/art.39486).
FROM ARTHRITIS & RHEUMATOLOGY
Food-antigen–specific immunoglobulin E is not a predictor of food allergies in atopic dermatitis
Food-antigen–specific immunoglobulin E (sIgE) levels were not clinically useful for predicting food allergy development in a study of infants with atopic dermatitis (AD).
The dual-phase study included 1,087 patients aged 3-18 months who had been diagnosed with AD for no more than 3 months prior to enrollment in the study and had at least mild disease activity. During the first phase of the study, which was a 36-month, randomized, double-blind, vehicle-controlled phase, half of patients were treated with placebo cream and the other half were treated with 1% pimecrolimus cream. In the second phase of the study, which was open-label, all patients received 1% pimecrolimus cream for up to 33 months or the patient’s 6th birthday, whichever occurred sooner. Patients were excluded if they received treatment with topical or systemic agents within 7 days before the first application of cream in the study.
The researchers followed food allergy development during both phases of the study. Other data collected by the researchers included sIgE levels for various foods at baseline and at the end of both phases of the study, with sIgE decision points having been assigned to each food.
By the end of the second phase of the trial, 15.9% of patients had developed a food allergy; the median time to the initial food allergy diagnosis was 292 days. The most common food allergies were to peanuts, cow’s milk, and egg whites, occurring in 7%, 4%, and 4% of patients, respectively. The percentage of patients with any allergy to a food other than fish decreased over time. Greater AD severity was predictive of food allergy development: the percentage of patients who developed one or more food allergies by the end of the study increased with increasing AD severity at baseline.
Total serum immunoglobulin E (IgE) and sIgE for milk, eggs, and peanuts measured at the end of the second phase also were increased in patients with increasing AD severity. Despite these findings, the positive predictive values for sIgE decision points for the foods tested were low (less than 0.6 for all values tested).
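Positive and negative predictive values of a decision point follow directly from the 2x2 table of test result versus true outcome. The counts below are hypothetical, chosen only to show how a cutoff can have a high NPV yet a low PPV when the allergy is relatively uncommon, as in this cohort:

```python
def predictive_values(tp, fp, fn, tn):
    """PPV and NPV from a 2x2 table of test result vs. true outcome."""
    ppv = tp / (tp + fp)  # P(allergy | sIgE at or above decision point)
    npv = tn / (tn + fn)  # P(no allergy | sIgE below decision point)
    return ppv, npv

# Hypothetical counts: 40 of 1,000 infants allergic; the cutoff also
# flags many non-allergic infants, dragging the PPV down
ppv, npv = predictive_values(tp=30, fp=70, fn=10, tn=890)
print(round(ppv, 2), round(npv, 2))  # 0.3 0.99
```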
“SIgE decision points, both published values and the novel decision points used in this study, had high [negative predictive values], in particular for peanut[s], egg white[s], and cow’s milk. Thus, patients with mild AD with sIgE levels below these cutoffs would be unlikely to have or develop these specific allergies and would not benefit from food challenges or elimination diets. Similarly, elevated sIgE, as defined by the decision points tested, had very low [positive predictive values] for food allergy, both for sIgE values at baseline and at the end of the [first phase of the study] ... Thus, despite an increased likelihood of allergy development with increasing sIgE shown for cow’s milk, egg[s], and peanut[s], our data do not support the use of sIgE testing for the diagnosis of food allergy in subjects without a history of reaction to that food,” said Dr. Jonathan M. Spergel of the Children’s Hospital of Philadelphia and his colleagues.
Read the full study in Pediatrics (doi: 10.1542/peds.2015-1444).
Food-antigen–specific immunoglobulin E (sIgE) levels were not clinically useful for predicting food allergy development in a study of infants with atopic dermatitis (AD).
The dual-phase study included 1,087 patients aged 3-18 months who had been diagnosed with AD no more than 3 months before enrollment and had at least mild disease activity. The first phase was a 36-month, randomized, double-blind, vehicle-controlled period in which half of the patients were treated with placebo (vehicle) cream and the other half with 1% pimecrolimus cream. In the open-label second phase, all patients received 1% pimecrolimus cream for up to 33 months or until the patient’s 6th birthday, whichever came first. Patients were excluded if they had been treated with topical or systemic agents within 7 days before the first application of cream in the study.
The researchers tracked food allergy development during both phases of the study. They also collected sIgE levels for various foods at baseline and at the end of each phase, with an sIgE decision point assigned to each food.
By the end of the second phase of the trial, 15.9% of patients had developed a food allergy, with a median time to initial diagnosis of 292 days. The most common food allergies were to peanuts, cow’s milk, and egg whites, occurring in 7%, 4%, and 4% of patients, respectively. The percentage of patients with an allergy to any food other than fish decreased over time. Greater AD severity at baseline predicted food allergy development: the percentage of patients who developed one or more food allergies by the end of the study rose with increasing baseline AD severity.
Total serum immunoglobulin E (IgE) and sIgE levels for milk, eggs, and peanuts measured at the end of the second phase also increased with AD severity. Despite these findings, the positive predictive values of the sIgE decision points for the foods tested were low (less than 0.6 for all values tested).
“SIgE decision points, both published values and the novel decision points used in this study, had high [negative predictive values], in particular for peanut[s], egg white[s], and cow’s milk. Thus, patients with mild AD with sIgE levels below these cutoffs would be unlikely to have or develop these specific allergies and would not benefit from food challenges or elimination diets. Similarly, elevated sIgE, as defined by the decision points tested, had very low [positive predictive values] for food allergy, both for sIgE values at baseline and at the end of the [first phase of the study] ... Thus, despite an increased likelihood of allergy development with increasing sIgE shown for cow’s milk, egg[s], and peanut[s], our data do not support the use of sIgE testing for the diagnosis of food allergy in subjects without a history of reaction to that food,” said Dr. Jonathan M. Spergel of the Children’s Hospital of Philadelphia and his colleagues.
Read the full study in Pediatrics (doi: 10.1542/peds.2015-1444).
FROM PEDIATRICS