Acute kidney injury common in children, young adults in ICU
Acute kidney injury is common in children and young adults admitted to ICUs, and cannot always be identified by plasma creatinine level alone, according to the authors of a study presented at Kidney Week 2016, sponsored by the American Society of Nephrology.
The Assessment of Worldwide Acute Kidney Injury, Renal Angina, and Epidemiology (AWARE) study was a prospective, international, observational study in 4,683 patients aged 3 months to 25 years, recruited from 32 pediatric ICUs over the course of 3 months.
Ahmad Kaddourah, MD, from the Center for Acute Care Nephrology at the Cincinnati Children’s Hospital Medical Center, and his coauthors found that 27% of the participants developed acute kidney injury and 12% developed severe acute kidney injury – defined as stage 2 or 3 acute kidney injury – within the first 7 days after admission.
The risk of death within 28 days was 77% higher among individuals with severe acute kidney injury, even after accounting for their original diagnosis when they were admitted to the ICU. Mortality among these individuals was 11%, compared with 2.5% among patients without severe acute kidney injury. These patients also had an increased use of renal replacement therapy and mechanical ventilation, and were more likely to have longer stays in hospital.
Researchers also saw a stepwise increase in 28-day mortality associated with maximum stage of acute kidney injury.
“The common and early occurrence of acute kidney injury reinforces the need for systematic surveillance for acute kidney injury at the time of admission to the ICU,” Dr. Kaddourah and his associates wrote. “Early identification of modifiable risk factors for acute kidney injury (e.g., nephrotoxic medications) or adverse sequelae (e.g., fluid overload) has the potential to decrease morbidity and mortality.”
Of particular note was the observation that 67% of the patients who met the urine-output criteria for acute kidney injury would not have been diagnosed using the plasma creatinine criteria alone. Furthermore, “mortality was higher among patients diagnosed with stage 3 acute kidney injury according to urine output than among those diagnosed according to plasma creatinine levels,” the authors reported.
There was a steady increase in the daily prevalence of acute kidney injury, from 15% on day 1 after admission to 20% by day 7. Patients with stage 1 acute kidney injury on day 1 also were more likely to progress to stage 2 or 3 by day 7, compared with patients who did not have acute kidney injury on admission.
However, around three-quarters of this increase in stage occurred within the first 4 days after admission, which the authors suggested would support a 4-day time frame for future studies on acute kidney injury in children. They also stressed that as their assessments for acute kidney injury stopped at day 7 after admission, there may have been incidents that were missed.
Dr. Kaddourah and his associates noted that although the rates of acute kidney injury and severe acute kidney injury seen in the study were slightly lower than those observed in studies in adults, the associations with morbidity and mortality were similar.
“The presence of chronic systemic diseases contributes to residual confounding in studies of acute kidney injury in adults,” they wrote. “Children have a low prevalence of such chronic diseases; thus, although the incremental association between acute kidney injury and risk of death mirrors that seen in adults, our study suggests that acute kidney injury itself may be key to the associated morbidity and mortality.”
The study was supported by the Pediatric Nephrology Center for Excellence at Cincinnati Children’s Hospital Medical Center. The authors declared grants, consultancies, speaking engagements, and other support from private industry, some related to and some outside of the submitted work.
A strength of this study is the definition of acute kidney injury, with the use of precise and validated criteria. Limitations of the study, beyond its observational nature, include the lack of data about diuretic and other treatment that may have influenced urine output, and the requirement for just a single baseline plasma creatinine level for study entry.
However, the study results indicate that acute kidney injury is not only common among critically ill children and young adults, but is associated with adverse outcomes, implying that we should look more carefully for markers of acute kidney injury. Given the link between acute kidney injury and subsequent chronic kidney disease, it is possible that identifying and treating acute kidney injury promptly might reduce the prevalence of chronic kidney disease, now estimated to affect roughly one in eight adults in the United States.
Julie R. Ingelfinger, MD, is a pediatric nephrologist at Massachusetts General Hospital and deputy editor of the New England Journal of Medicine. These comments are excerpted from an accompanying editorial (N Engl J Med. 2016 Nov 18. doi: 10.1056/NEJMe613456). No conflicts of interest were declared.
FROM KIDNEY WEEK 2016
Key clinical point: Acute kidney injury is common in children and young adults admitted to the ICU, but many cases may be missed using plasma creatinine criteria alone.
Major finding: Among children and young adults admitted to intensive care, as many as 1 in 4 may have acute kidney injury and 1 in 10 may have severe acute kidney injury.
Data source: Prospective observational study in 4,683 patients aged 3 months to 25 years admitted to pediatric intensive care.
Disclosures: The study was supported by the Pediatric Nephrology Center for Excellence at Cincinnati Children’s Hospital Medical Center. The authors declared grants, consultancies, speaking engagements and other support from private industry, some related to and some outside of the submitted work.
No primary prevention gains from low-dose aspirin in diabetes
Low-dose aspirin does not appear to reduce the risk of cardiovascular events in individuals with type 2 diabetes but without preexisting cardiovascular disease, according to a study presented at the American Heart Association scientific sessions and published simultaneously in the Nov. 15 edition of Circulation.
In the long-term follow-up of participants in an open-label controlled trial, Japanese researchers followed 2,539 patients with type 2 diabetes who were randomized to daily aspirin (81 mg or 100 mg) or no aspirin, for a median of 10.3 years to see the impact on the incidence of cardiovascular events.
In their study, they found that a daily regimen of low-dose aspirin was not associated with a significant change in the risk of cardiovascular events including sudden death, fatal or nonfatal coronary artery disease, fatal or nonfatal stroke, and peripheral vascular disease (hazard ratio, 1.14; 95% confidence interval, 0.91-1.42). They also found no significant difference between the two groups in secondary outcomes, which were a composite of coronary artery, cerebrovascular, and vascular events.
This lack of impact persisted even after age, sex, glycemic control, kidney function, smoking status, hypertension, and dyslipidemia were accounted for, and it was also seen in sensitivity analyses on the intention-to-treat cohort.
However, the investigators did find a significantly higher rate of gastrointestinal bleeding in patients taking aspirin, compared with the control group (2% vs. 0.9%, P = .03) but no difference in the rate of hemorrhagic stroke.
“Meta-analyses in patients with diabetes have reported that aspirin has a smaller benefit for primary prevention than in general populations, although patients with diabetes are at high risk for cardiovascular events,” the authors wrote. “It seems there are differential effects of low-dose aspirin therapy on preventing cardiovascular events in patients with and without diabetes.”
The study was supported by the Ministry of Health, Labour, and Welfare of Japan and the Japan Heart Foundation. Eight authors declared funding, grants, honoraria, and other support from the pharmaceutical industry. No other conflicts of interest were declared.
FROM THE AHA SCIENTIFIC SESSIONS
Key clinical point: Low-dose aspirin does not appear to reduce the risk of cardiovascular events in patients with type 2 diabetes without preexisting cardiovascular disease, but it is associated with a higher rate of gastrointestinal bleeding.
Major finding: Patients with type 2 diabetes taking daily low-dose aspirin showed no significant reductions in cardiovascular events, compared with a control group not taking aspirin.
Data source: Long-term follow-up in a randomized controlled trial in 2,539 patients with type 2 diabetes in the absence of preexisting cardiovascular disease.
Disclosures: The study was supported by the Ministry of Health, Labour, and Welfare of Japan and the Japan Heart Foundation. Eight authors declared funding, grants, honoraria, and other support from the pharmaceutical industry. No other conflicts of interest were declared.
Preschool ADHD diagnoses plateaued after 2011 AAP guideline
The introduction of the 2011 American Academy of Pediatrics practice guidelines on attention-deficit/hyperactivity disorder was associated with a leveling off in the number of diagnoses in preschool children.
“In the preguideline period, the trajectory of ADHD diagnosis increased slightly but significantly across practices,” Alexander G. Fiks, MD, from the Children’s Hospital of Philadelphia, and his coinvestigators wrote. “However, the rate of ADHD diagnosis no longer increased significantly after guideline release.”
They found that the rate of ADHD diagnoses was 0.7% before the release of the 2011 guidelines and 0.9% after, while the rate of stimulant prescriptions remained constant at 0.4% across the entire study period (Pediatrics. 2016 Nov 15. doi: 10.1542/peds.2016-2025).
While the levels of stimulants prescribed remained the same across the period of the analysis, the proportion of children diagnosed with ADHD who were prescribed stimulants had already been in significant decline before the release of the guidelines. After the guidelines, this rate also plateaued, signifying that before – but not after – the guidelines, children were becoming less likely to be prescribed stimulant medication following an ADHD diagnosis.
Commenting on the change in diagnostic and prescribing patterns, the investigators noted that the primary goal of practice guidelines was to standardize care.
“In the case of preschool ADHD, such standardization might have resulted in an increasing trajectory in diagnosis of preschool children if pediatric clinicians had not previously been evaluating ADHD when an evaluation was warranted,” they wrote. “Alternatively, a decrease in diagnosis could have occurred if clinicians were applying more rigorous standards to the diagnosis and therefore excluding certain children who might have previously been diagnosed or no change if a combination of these two patterns was occurring or if there was no change in the standard used.”
They suggested that the observation of a decreasing likelihood of stimulant prescriptions for ADHD before the guidelines may have been driven by the results of the 2006 Preschool ADHD Treatment Study, which showed a lower effect size of stimulant medication in preschool-aged children, compared with school-aged children.
“Alternatively, findings may have resulted from a decrease in the severity of preschool children diagnosed with ADHD as the proportion of all preschoolers diagnosed with ADHD increased,” they wrote.
The study was supported by the U.S. Department of Health & Human Services. Dr. Fiks reported receiving a research grant from Pfizer for work on ADHD unrelated to this study. The other investigators reported having no financial disclosures.
It is encouraging for those of us who worked on crafting the revised guidelines to find some evidence about the impact of those recommendations. However, as the investigators point out, although they were able to show that the recommended criteria for the use of stimulant medications, specifically methylphenidate, did not result in an increase in its use among preschool-aged children with ADHD, the frequency of behavioral parent training, the first-line recommended treatment, could not be determined.
In addition, to address the issue that was the focus of this study, examining the implementation of evidence into practice, there needs to be greater standardization of assessment and treatment modalities so that we can better examine the outcomes of changes in treatment. Studies of prevalence and treatments of children with ADHD have indicated wide variations across the country. Clarifying those differences will require the improved ability to examine the various factors responsible for these variations, particularly across the systems of care that go beyond just medication use.
Mark L. Wolraich, MD, is from the University of Oklahoma Health Sciences Center, Oklahoma City. These comments are adapted from an accompanying editorial (Pediatrics. 2016 Nov 15. doi: 10.1542/peds.2016-2928). He reported having no financial disclosures.
It is encouraging for those of us who worked on crafting the revised guidelines to find some evidence about the impact of those recommendations. However, as the investigators point out, although they were able to find out that, in preschool-aged children with ADHD, recommended criteria for the use of stimulant medications, specifically methylphenidate, did not result in an increase in its use in this age group, the frequency of behavioral parent training, the first-line recommended treatment, could not be determined.
In addition, to address the issue that was the focus of this study, examining the implementation of evidence into practice, there needs to be greater standardization of assessment and treatment modalities so that we can better examine the outcomes of changes in treatment. Studies of prevalence and treatments of children with ADHD have indicated wide variations across the country. Clarifying those differences will require the improved ability to examine the various factors responsible for these variations, particularly across the systems of care that go beyond just medication use.
Mark L. Wolraich, MD, is from the University of Oklahoma Health Sciences Center, Oklahoma City. These comments are adapted from an accompanying editorial (Pediatrics. 2016 Nov 15. doi: 10.1542/peds.2016-2928). He reported having no financial disclosures.
It is encouraging for those of us who worked on crafting the revised guidelines to find some evidence about the impact of those recommendations. However, as the investigators point out, although they were able to find out that, in preschool-aged children with ADHD, recommended criteria for the use of stimulant medications, specifically methylphenidate, did not result in an increase in its use in this age group, the frequency of behavioral parent training, the first-line recommended treatment, could not be determined.
In addition, to address the issue that was the focus of this study, examining the implementation of evidence into practice, there needs to be greater standardization of assessment and treatment modalities so that we can better examine the outcomes of changes in treatment. Studies of prevalence and treatments of children with ADHD have indicated wide variations across the country. Clarifying those differences will require the improved ability to examine the various factors responsible for these variations, particularly across the systems of care that go beyond just medication use.
Mark L. Wolraich, MD, is from the University of Oklahoma Health Sciences Center, Oklahoma City. These comments are adapted from an accompanying editorial (Pediatrics. 2016 Nov 15. doi: 10.1542/peds.2016-2928). He reported having no financial disclosures.
The introduction of the 2011 American Academy of Pediatrics practice guidelines on attention-deficit/hyperactivity disorder was associated with a leveling off in the number of diagnoses in preschool children.
“In the preguideline period, the trajectory of ADHD diagnosis increased slightly but significantly across practices,” Alexander G. Fiks, MD, from the Children’s Hospital of Philadelphia, and his coinvestigators wrote. “However, the rate of ADHD diagnosis no longer increased significantly after guideline release.”
They found that the rate of ADHD diagnoses was 0.7% before the release of the 2011 guidelines and 0.9% after, while the rate of stimulant prescriptions remained constant at 0.4% across the entire study period (Pediatrics. 2016 Nov 15. doi: 10.1542/peds.2016-2025).
While the levels of stimulants prescribed remained the same across the period of the analysis, the proportion of children diagnosed with ADHD who were prescribed stimulants had already been in significant decline before the release of the guidelines. After the guidelines, this rate also plateaued, signifying that before – but not after – the guidelines, children were becoming less likely to be prescribed stimulant medication following an ADHD diagnosis.
Commenting on the change in diagnostic and prescribing patterns, the investigators noted that the primary goal of practice guidelines was to standardize care.
“In the case of preschool ADHD, such standardization might have resulted in an increasing trajectory in diagnosis of preschool children if pediatric clinicians had not previously been evaluating ADHD when an evaluation was warranted,” they wrote. “Alternatively, a decrease in diagnosis could have occurred if clinicians were applying more rigorous standards to the diagnosis and therefore excluding certain children who might have previously been diagnosed or no change if a combination of these two patterns was occurring or if there was no change in the standard used.”
They suggested that the observation of a decreasing likelihood of stimulant prescriptions for ADHD before the guidelines may have been driven by the results of the 2006 Preschool ADHD Treatment Study, which showed a lower effect size of stimulant medication in preschool-aged children, compared with school-aged children.
“Alternatively, findings may have resulted from a decrease in the severity of preschool children diagnosed with ADHD as the proportion of all preschoolers diagnosed with ADHD increased,” they wrote.
The study was supported by the U.S. Department of Health & Human Services. Dr. Fiks reported receiving a research grant from Pfizer for work on ADHD unrelated to this study. The other investigators reported having no financial disclosures.
Key clinical point: Release of the 2011 AAP practice guidelines was associated with a plateau in ADHD diagnoses among preschool children.
Major finding: The rate of ADHD diagnoses was 0.7% before the guidelines and 0.9% after, while stimulant prescriptions remained constant at 0.4% across the study period.
Data source: An analysis of electronic health record data from 143,881 children across 63 primary care practices from January 2008 to July 2014.
Disclosures: The study was supported by the U.S. Department of Health & Human Services. Dr. Fiks reported receiving a research grant from Pfizer for work on ADHD unrelated to this study. The other investigators reported having no financial disclosures.
Inhaled laninamivir reduces risk of influenza in young children
The inhaled neuraminidase inhibitor laninamivir has been shown to significantly reduce the likelihood of developing influenza among children exposed to a family member with the infection, according to a study recently published in Pediatrics.
In a double-blind, placebo-controlled study, researchers randomized 343 children under 10 years old – who had an influenza-infected family member – to a single 20-mg dose of inhaled laninamivir octanoate or placebo.
Subgroup analyses suggested the treatment was more effective in children under 7 years old, with a relative risk reduction of 64%, compared with a non–statistically significant 28% reduction in those aged 7-10 years (Pediatrics. 2016 Nov 2. doi: 10.1542/peds.2016-0109).
The treatment was also effective among children whose index case was infected with influenza A (H3N2).
Dr. Takashi Nakano, from Kawasaki Hospital in Okayama, Japan, and coauthors reported a similar incidence of adverse events in the laninamivir and placebo groups, with no serious adverse events and no withdrawals due to adverse events. However, the authors noted that there were very few study participants considered at high risk, such as patients with chronic respiratory disease, and suggested further studies of the impact and efficacy of treatment in high-risk groups.
The researchers noted that, despite increasing rates of influenza vaccination and the availability of other neuraminidase inhibitors, such as oseltamivir and peramivir, pandemic outbreaks of influenza are still occurring. There has also been evidence of resistance to both oseltamivir and peramivir, for example, in the 2013/2014 outbreak of influenza A (H1N1) in Japan. “Given the limitations of vaccination, extensive variations in the option for antiinfluenza prophylaxis are desirable as an adjunct to influenza vaccine,” the researchers wrote.
Laninamivir has been studied in adults and children and shown to be effective at treating influenza infection, but its efficacy as prophylaxis in children under 10 years old had not previously been studied.
“Since a single 20-mg dose of laninamivir octanoate revealed prophylactic effect, the regimen in the current study is a highly user-friendly option,” the researchers wrote. “Although the numbers of infected individuals may differ by season, the number needed to treat based on the incidence of clinical influenza for the two groups in the current study was 11.”
The study was funded by Daiichi Sankyo. Two of the study authors reported being consultants for Daiichi Sankyo, as well as having financial relationships with other pharmaceutical companies. The other study authors are employees of Daiichi Sankyo.
Although vaccination remains the preferred approach for influenza prevention, additional options for influenza prophylaxis in children are important, given concerns for the emergence of resistance, the known antiviral adverse side effect profiles, possible limited supplies, and the potential for spotty patient compliance. This drug was well tolerated, without significant adverse events reported, and there were no neurologic symptoms or abnormal behavior, which have occurred with influenza illness and with other neuraminidase inhibitors in Japan.
Prompt initiation of influenza prophylaxis is necessary to ensure efficacy, which hinges on proper and prompt identification of index cases. Therefore, efforts to educate parents and families on the early signs and symptoms of influenza and the importance of seeking medical attention to confirm the diagnosis in the index case are crucial for timely initiation of prophylaxis in household contacts.
Flor M. Munoz, MD, is from the department of pediatrics at the Baylor College of Medicine and Texas Children’s Hospital in Houston, and Henry H. Bernstein, DO, is from the department of pediatrics, Hofstra Northwell School of Medicine, Hempstead, N.Y., and Cohen Children’s Medical Center of New York in New Hyde Park. These comments are adapted from an accompanying editorial (Pediatrics. 2016 Nov 2. doi: 10.1542/peds.2016-2371). The authors reported having no relevant financial disclosures.
FROM PEDIATRICS
Key clinical point: A single inhaled dose of laninamivir octanoate reduced the risk of influenza in children exposed to an infected household member.
Major finding: Children treated with laninamivir showed a 45.8% reduction in the risk of influenza, compared with the placebo group.
Data source: Randomized, double-blind, placebo-controlled trial in 343 children under 10 years old.
Disclosures: The study was funded by Daiichi Sankyo. Two of the study authors reported being consultants for Daiichi Sankyo, as well as having financial relationships with other pharmaceutical companies. The other study authors are employees of Daiichi Sankyo.
Broadly neutralizing antibody VRC01 fails to sustain HIV viral suppression
Passive immunization against HIV using the broadly neutralizing antibody VRC01 is associated with a delay in plasma viral rebound in individuals undergoing interruption of antiretroviral therapy, according to a new study, but the viral suppression is not sustained.
Two open-label trials investigated the impact of different dosing regimens of VRC01 in a total of 24 patients who were taking a break from antiretroviral therapy, according to a paper published Nov. 9 in the New England Journal of Medicine.
Katharine J. Bar, MD, of the Penn Center for AIDS Research at the University of Pennsylvania, Philadelphia, and her coauthors suggested broadly neutralizing antibodies such as VRC01 could target the persistent viral reservoir that leads to rapid viral rebound as soon as antiretroviral therapy is stopped (N Engl J Med. 2016 Nov 9. doi: 10.1056/NEJMoa1608243).
However, in these two studies, VRC01 did not achieve durable viral suppression. Overall, participants in both trials were significantly more likely than historical controls to maintain viral suppression 4 weeks after interrupting antiretroviral therapy (38% and 80% vs. 13%), but this difference was no longer significant by week 8.
In one trial (A5340), 12 of the 13 participants with evaluable data showed viral rebound to more than 200 copies/mL by week 8; in the second (NIH) trial, the median time to rebound above 40 copies/mL was 39 days.
Plasma VRC01 levels in all participants were similar to those observed in previous trials: above 50 mcg/mL for 8 weeks in the A5340 trial and above 100 mcg/mL in the NIH trial. Levels remained above 50 mcg/mL even at the time of viral rebound in all but one patient.
Researchers performed post hoc analyses of the sequence diversity at the time of viral rebound and compared these to samples from eight participants taken before initiation of antiretroviral therapy.
“Sequence-based and neutralization analyses suggest that VRC01 can restrict the clonality of rebounding virus in some participants, selecting for pre-existing resistance, and drive the emergence of VRC01-resistant virus,” the authors wrote.
However, they pointed out that the early years of antiretroviral drug development showed how quickly resistance could develop in a single-agent situation. Since that time, a multiagent approach directed at different targets has achieved much more potent and sustained viral suppression.
“Analogous to current regimens of highly successful combination ART that targets multiple HIV gene products, our data suggest that immunotherapy will probably require multiple bNAbs [broadly neutralizing antibodies] that target different sites on the HIV envelope glycoprotein,” the authors concluded.
The study was supported by the National Institute of Allergy and Infectious Diseases, the Penn Center for AIDS Research, the Penn Clinical Trials Unit, the University of Alabama at Birmingham Center for AIDS Research, the UAB Clinical Trials Unit, the AIDS Clinical Trials Group Statistical and Data Analysis Center, a Ruth L. Kirschstein National Research Service Award, and the National Institutes of Health.
Two authors declared personal fees from the pharmaceutical industry outside the submitted work, and one author served as a contractor to the NIH through Columbus Technologies. No other conflicts of interest were declared.
FROM THE NEW ENGLAND JOURNAL OF MEDICINE
Key clinical point: Passive immunization against HIV using the broadly neutralizing antibody VRC01 is associated with a brief delay in viral rebound in individuals undergoing interruption of antiretroviral therapy.
Major finding: Treatment with the broadly neutralizing antibody VRC01 was associated with a significant delay in viral rebound in individuals who had stopped antiretroviral therapy, but this was not sustained beyond 8 weeks.
Data source: Two prospective studies in 24 individuals with HIV infection undergoing a break from antiretroviral therapy.
Disclosures: The study was supported by the National Institute of Allergy and Infectious Diseases, the Penn Center for AIDS Research, the Penn Clinical Trials Unit, the University of Alabama at Birmingham Center for AIDS Research, the UAB Clinical Trials Unit, the AIDS Clinical Trials Group Statistical and Data Analysis Center, a Ruth L. Kirschstein National Research Service Award, and the National Institutes of Health. Two authors declared personal fees from pharmaceutical industry outside the submitted work, and one author served as a contractor to the NIH through Columbus Technologies. No other conflicts of interest were declared.
CDC: Seven cases of multidrug-resistant C. auris have occurred in the United States
The Centers for Disease Control and Prevention have reported the first cases of the multidrug-resistant fungal infection Candida auris in the United States, with evidence suggesting transmission may have occurred within U.S. health care facilities.
The report, published in the Nov. 4 edition of Morbidity and Mortality Weekly Report, described seven cases of patients infected with C. auris, which was isolated from blood in five cases, urine in one, and the ear in one. All the patients with bloodstream infections had central venous catheters at the time of diagnosis, and four of these patients died in the weeks and months after diagnosis of the infection.
Patients’ underlying conditions usually involved immune system suppression resulting from corticosteroid therapy, malignancy, short gut syndrome, or paraplegia with a long-term, indwelling Foley catheter.
C. auris was first isolated in 2009 in Japan, but has since been reported in countries including Colombia, India, South Africa, Israel, and the United Kingdom. Snigdha Vallabhaneni, MD, of the mycotic diseases branch of the CDC’s Division of Foodborne, Waterborne, and Environmental Diseases, and her coauthors said its appearance in the United States is a cause for serious concern (MMWR. 2016 Nov 4. doi: 10.15585/mmwr.mm6544e1).
“First, many isolates are multidrug resistant, with some strains having elevated minimum inhibitory concentrations to drugs in all three major classes of antifungal medications, a feature not found in other clinically relevant Candida species,” the authors wrote. All the patients with bloodstream infections were treated with antifungal echinocandins, and one also received liposomal amphotericin B.
“Second, C. auris is challenging to identify, requiring specialized methods such as matrix-assisted laser desorption/ionization time-of-flight or molecular identification based on sequencing the D1-D2 region of the 28s ribosomal DNA.”
They also highlighted that C. auris is known to cause outbreaks in health care settings. Samples taken from the mattress, bedside table, bed rail, chair, and windowsill in the room of one patient all tested positive for C. auris.
The authors also sequenced the genome of the isolates and found that isolates taken from patients admitted to the same hospital in New Jersey or the same Illinois hospital were nearly identical.
“Facilities should ensure thorough daily and terminal cleaning of rooms of patients with C. auris infections, including use of an [Environmental Protection Agency]–registered disinfectant with a fungal claim,” the authors wrote, stressing that facilities and laboratories should continue to report cases and forward suspicious unidentified Candida isolates to state or local health authorities and the CDC.
No conflicts of interest were declared.
The Centers for Disease Control and Prevention have reported the first cases of the multidrug-resistant fungal infection Candida auris in the United States, with evidence suggesting transmission may have occurred within U.S. health care facilities.
The report, published in the Nov. 4 edition of Morbidity and Mortality Weekly Report, described seven cases of patients infected with C. auris, which was isolated from blood in five cases, urine in one, and the ear in one. All the patients with bloodstream infections had central venous catheters at the time of diagnosis, and four of these patients died in the weeks and months after diagnosis of the infection.
Patients’ underlying conditions usually involved immune system suppression resulting from corticosteroid therapy, malignancy, short gut syndrome, or paraplegia with a long-term, indwelling Foley catheter.
C. auris was first isolated in 2009 in Japan, but has since been reported in countries including Colombia, India, South Africa, Israel, and the United Kingdom. Snigdha Vallabhaneni, MD, of the Mycotic Diseases Branch in the CDC’s Division of Foodborne, Waterborne, and Environmental Diseases, and her coauthors said its appearance in the United States is a cause for serious concern (MMWR. 2016 Nov 4. doi: 10.15585/mmwr.mm6544e1).
Key clinical point: The first cases of the multidrug-resistant fungal infection C. auris have been reported in the United States.
Major finding: Seven cases of infection with the multidrug-resistant emerging fungal infection C. auris have been reported in the United States, five of which were bloodstream infections.
Data source: A case series of seven patients with C. auris infection in the United States.
Disclosures: No conflicts of interest were declared.
Study finds no increase in microcephaly with Tdap vaccine in pregnancy
The combined tetanus, diphtheria, and acellular pertussis (Tdap) vaccine is not associated with an increased risk of microcephaly and other structural birth defects when administered during pregnancy, according to findings from a large, retrospective cohort study.
The U.S. Advisory Committee on Immunization Practices currently recommends administration of the Tdap vaccine between 27 and 36 weeks’ gestation in every pregnancy. However, the overlap of the start of Brazil’s maternal Tdap immunization in November 2014 with the substantial increase in microcephaly cases in 2015 prompted concerns of an association between the vaccine and structural birth defects.
The investigators found that Tdap immunization was not significantly associated with an increased risk for microcephaly during any week of pregnancy (adjusted prevalence ratio, 0.86; 95% CI, 0.60-1.24). They also saw no increased risk of microcephaly when vaccinations occurred before 14 weeks’ gestation (adjusted prevalence ratio, 0.96; 95% CI, 0.36-2.58), or when vaccinations were administered between 27 weeks’ and 36 weeks’ gestation (adjusted prevalence ratio, 1.01; 95% CI, 0.63-1.61). The findings were similar for other structural defects, including congenital heart defects, spina bifida, encephalocele, and anophthalmia (JAMA. 2016;316[17]:1823-5).
“These results expand upon what is known about maternal Tdap vaccination safety to include information about structural birth defects and microcephaly in offspring,” the investigators wrote. “The findings support recommendations for routine Tdap administration during pregnancy.”
However, they noted that the study findings may have been limited by incomplete data on women’s immunization status, birth defects, and defects that may have resulted in pregnancy loss or elective termination.
The study was funded by the Centers for Disease Control and Prevention. The investigators reported having no relevant financial disclosures.
FROM JAMA
Key clinical point: Tdap vaccination during pregnancy was not associated with an increased risk of microcephaly or other structural birth defects.
Major finding: Tdap immunization was not significantly associated with an increased risk for microcephaly during any week of pregnancy (adjusted prevalence ratio, 0.86; 95% CI, 0.60-1.24).
Data source: A retrospective cohort study in 41,654 singleton infants born to women who received Tdap during pregnancy and a control group of 282,809 babies born to unvaccinated women.
Disclosures: The study was funded by the Centers for Disease Control and Prevention. The investigators reported having no relevant financial disclosures.
Home-based intervention improves cognitive impairment in cancer survivors
A home-based intervention designed to address cognitive impairment in cancer survivors led to significant improvements in perceived cognitive impairment, anxiety, stress, and quality of life, compared with usual care.
The intervention, a computerized neurocognitive learning program, “targets cognitive domains including visual precision, divided attention, working memory, field of view, and visual processing speed, which are frequently affected in patients with cancer,” wrote Victoria J. Bray, MD, of the University of Sydney, and coauthors.
Investigators evaluated the program in a randomized controlled trial of 242 adult cancer survivors. The majority were female (95%) and had been treated for breast cancer (89%). The mean time since completion of chemotherapy was 27 months (6-60 months).
The program, Insight from Posit Science, involved four 40-minute sessions per week for 15 weeks.
At the end of the 15-week intervention, the 121 patients in the intervention group showed significantly less perceived cognitive impairment, according to the Functional Assessment of Cancer Therapy Cognitive Function questionnaire, than the 121 patients in the standard care control group. This improvement persisted at the 6-month follow-up (J Clin Oncol. 2016 Oct 31. doi: 10.1200/JCO.2016.67.8201).
Participants in the intervention group also reported significantly better perceived cognitive abilities, and significantly less impact on their quality of life from cognitive impairment. They also reported having fewer comments from others on their cognitive impairment after the intervention finished, although this difference between the two groups disappeared by 6 months.
The authors saw no significant differences between the two groups in neuropsychological function during the follow-up; however, they stressed this result should be interpreted with caution because of missing data at both the 15-week and 6-month follow-up.
The intervention was also associated with significantly less anxiety, depression, and fatigue at the end of the 15-week period but not at the 6-month follow-up. Participants did show significant improvements in perceived stress at both follow-up points, compared with those in the control group.
Overall, only 27% of participants finished the program in the recommended 15-week time frame, and 14% never started the program.
The authors said there was a large unmet need for effective treatment options for cancer survivors experiencing cognitive symptoms after cancer treatment, even though previous research had suggested that cognitive rehabilitation strategies were feasible.
“Our large RCT [randomized controlled trial] adds weight to this evidence, confirming that the use of Insight led to an improvement in cognitive symptoms,” they wrote, pointing out the advantage of this relatively inexpensive, home-based treatment approach. “The program has the potential to provide a new treatment option for patients with cancer with cognitive symptoms, where previously none existed.”
FROM JOURNAL OF CLINICAL ONCOLOGY
Key clinical point: A home-based intervention for cancer survivors can improve perceived cognitive impairment, anxiety, stress, and quality of life.
Major finding: Patients who undertook a home-based cognitive impairment intervention for cancer survivors showed significantly lower scores for perceived cognitive impairment, compared with those in the standard care control group.
Data source: A randomized controlled trial in 242 adult cancer survivors.
Disclosures: The study was supported by the Cancer Council New South Wales, Friends of the Mater Foundation, the Clinical Oncology Society of Australia/ Roche Hematology Oncology Targeted Therapies Fellowship, a Pfizer Cancer Research Grant, and the National Breast Cancer Foundation. Three authors declared consultancies, travel support, and research funding from the pharmaceutical industry.
ReACT: No benefit from routine coronary angiography after PCI
Routine follow-up coronary angiography after percutaneous coronary intervention leads to increased rates of coronary revascularization but without any significant benefits for outcomes, according to a study presented at the Transcatheter Cardiovascular Therapeutics annual meeting and published simultaneously on Nov. 1 in the Journal of the American College of Cardiology: Cardiovascular Interventions.
Hiroki Shiomi, MD, from Kyoto University, and his coauthors reported on ReACT, a prospective, open-label randomized controlled trial of routine follow-up coronary angiography in 700 patients who underwent successful percutaneous coronary intervention (PCI).
Among the 349 patients randomized to follow-up coronary angiography (FUCAG), 12.8% underwent any coronary revascularization within the first year after PCI, compared with 3.8% of the 351 patients randomized to standard clinical follow-up. The routine angiography group also had a higher incidence of target lesion revascularization in the first year after the index PCI (7.0% vs. 1.7%).
In both cases, however, the cumulative 5-year incidence of coronary and target lesion revascularization was not significantly different between the routine angiography and control groups. Researchers also saw no significant benefit from routine FUCAG in terms of the cumulative 5-year incidence of all-cause death, myocardial infarction, stroke, or emergency hospitalization for acute coronary syndrome or heart failure, compared with clinical follow-up (22.4% vs. 24.7%; P = 0.70).
Nor were there any significant differences between the two groups in these individual components, or in the cumulative 5-year incidence of major bleeding (JACC Cardiovasc Interv. 2016 Nov 1).
The authors commented that several previous studies have shown that routine FUCAG does not improve clinical outcomes, although it is still commonly performed in Japan after PCI.
“However, previous studies in the drug-eluting stents (DES) era were conducted in the context of pivotal randomized trials of DES and there have been no randomized clinical trials evaluating long-term clinical impact of routine FUCAG after PCI in the real world clinical practice including high-risk patients for cardiovascular events risk such as complex coronary artery disease and acute myocardial infarction (AMI) presentation,” the authors wrote.
Overall, 85.4% of patients in the routine angiography group and 12% of those in the clinical follow-up group underwent coronary angiography in the first year, including procedures performed for clinical reasons.
In the clinical follow-up group, coronary angiography was performed because of acute coronary syndrome (14%), recurrence of angina (60%), other clinical reasons (14%), or no clinical reason (12%). The control group also had more noninvasive physiological stress testing such as treadmill exercise test and stress nuclear study.
“Considering the invasive nature of coronary angiography and increased medical expenses, routine FUCAG after PCI would not be allowed as the usual clinical practice, unless patients have recurrent symptoms or objective evidence of ischemia,” the authors wrote.
“On the other hand, there was no excess of adverse clinical events with routine angiographic follow-up strategy except for the increased rate of 1-year repeat coronary revascularization.”
Given this, they suggested that scheduled angiographic follow-up might still be considered acceptable for early in vivo or significant coronary device trials.
The authors noted that the trial ended up underpowered because of a reduced final sample size and a lower-than-anticipated event rate, but said the findings warrant larger-scale studies. In particular, they highlighted the question of what impact routine follow-up angiography might have in higher-risk patients, such as those with left main or multivessel coronary artery disease.
“Finally, because patient demographics, practice patterns including the indication of coronary revascularization, and clinical outcomes in Japan may be different from those outside Japan, generalizing the present study results to populations outside Japan should be done with caution.”
This study was supported by an educational grant from the Research Institute for Production Development (Kyoto). One author declared honoraria for education consulting from Boston Scientific Corporation.
Routine follow-up coronary angiography after percutaneous coronary intervention leads to increased rates of coronary revascularization but without any significant benefits for outcomes, according to a study presented at the Transcatheter Cardiovascular Therapeutics annual meeting and published simultaneously on Nov. 1 in the Journal of the American College of Cardiology: Cardiovascular Interventions.
Hiroki Shiomi, MD, from Kyoto University, and his coauthors reported on ReACT, a prospective, open-label randomized controlled trial of routine follow-up coronary angiography in 700 patients who underwent successful percutaneous coronary intervention (PCI).
Among the 349 patients randomized to follow-up coronary angiography (FUCAG), 12.8% underwent any coronary revascularization within the first year after PCI, compared with 3.8% of the 351 patients randomized to standard clinical follow-up. The routine angiography group also had a higher incidence of target lesion revascularization in the first year after the index PCI (7.0% vs. 1.7%).
In both these cases, the cumulative 5-year incidence of coronary or target lesion revascularization was not significantly different between the routine angiography and control groups. However researchers saw no significant benefit from routine FUCAG in terms of the cumulative 5-year incidence of all-cause death, myocardial infarction, stroke, or emergency hospitalizations for acute coronary syndrome or heart failure, compared with clinical follow-up (22.4% vs. 24.7%; P = 0.70).
Nor were there any significant differences between the two groups in these individual components, or in the cumulative 5-year incidence of major bleeding (JACC Cardiovasc Interv. 2016 Nov 1.)
The authors commented that several previous studies have shown that routine FUCAG does not improve clinical outcomes, although it is still commonly performed in Japan after PCI.
“However, previous studies in the drug-eluting stents (DES) era were conducted in the context of pivotal randomized trials of DES and there have been no randomized clinical trials evaluating long-term clinical impact of routine FUCAG after PCI in the real world clinical practice including high-risk patients for cardiovascular events risk such as complex coronary artery disease and acute myocardial infarction (AMI) presentation,” the authors wrote.
Overall, 85.4% of patients in the routine angiography group and 12% of those in the clinical care group underwent coronary angiography in the first year, including for clinical reasons.
In the clinical follow-up group, coronary angiography was performed because of acute coronary syndrome (14%), recurrence of angina (60%), other clinical reasons (14%), or no clinical reason (12%). The control group also had more noninvasive physiological stress testing such as treadmill exercise test and stress nuclear study.
“Considering the invasive nature of coronary angiography and increased medical expenses, routine FUCAG after PCI would not be allowed as the usual clinical practice, unless patients have recurrent symptoms or objective evidence of ischemia,” the authors wrote.
“On the other hand, there was no excess of adverse clinical events with routine angiographic follow-up strategy except for the increased rate of 1-year repeat coronary revascularization.”
Given this, they suggested that scheduled angiographic follow-up might still be considered acceptable for early in vivo or significant coronary device trials.
While the authors said the trial ended up being underpowered because of a reduced final sample size and lower-than-anticipated event rate, it did warrant further larger-scale studies. In particular, they highlighted the question of what impact routine follow-up angiography might have in higher-risk patients, such as those with left main or multivessel coronary artery disease.
“Finally, because patient demographics, practice patterns including the indication of coronary revascularization, and clinical outcomes in Japan may be different from those outside Japan, generalizing the present study results to populations outside Japan should be done with caution.”
This study was supported by an educational grant from the Research Institute for Production Development (Kyoto). One author declared honoraria for education consulting from Boston Scientific Corporation.
Routine follow-up coronary angiography after percutaneous coronary intervention leads to increased rates of coronary revascularization but without any significant benefits for outcomes, according to a study presented at the Transcatheter Cardiovascular Therapeutics annual meeting and published simultaneously on Nov. 1 in the Journal of the American College of Cardiology: Cardiovascular Interventions.
Hiroki Shiomi, MD, from Kyoto University, and his coauthors reported on ReACT, a prospective, open-label randomized controlled trial of routine follow-up coronary angiography in 700 patients who underwent successful percutaneous coronary intervention (PCI).
Among the 349 patients randomized to follow-up coronary angiography (FUCAG), 12.8% underwent any coronary revascularization within the first year after PCI, compared with 3.8% of the 351 patients randomized to standard clinical follow-up. The routine angiography group also had a higher incidence of target lesion revascularization in the first year after the index PCI (7.0% vs. 1.7%).
In both these cases, the cumulative 5-year incidence of coronary or target lesion revascularization was not significantly different between the routine angiography and control groups. However researchers saw no significant benefit from routine FUCAG in terms of the cumulative 5-year incidence of all-cause death, myocardial infarction, stroke, or emergency hospitalizations for acute coronary syndrome or heart failure, compared with clinical follow-up (22.4% vs. 24.7%; P = 0.70).
Nor were there any significant differences between the two groups in these individual components, or in the cumulative 5-year incidence of major bleeding (JACC Cardiovasc Interv. 2016 Nov 1).
The authors commented that several previous studies have shown that routine FUCAG does not improve clinical outcomes, although it is still commonly performed in Japan after PCI.
“However, previous studies in the drug-eluting stents (DES) era were conducted in the context of pivotal randomized trials of DES and there have been no randomized clinical trials evaluating long-term clinical impact of routine FUCAG after PCI in the real world clinical practice including high-risk patients for cardiovascular events risk such as complex coronary artery disease and acute myocardial infarction (AMI) presentation,” the authors wrote.
Overall, 85.4% of patients in the routine angiography group and 12% of those in the clinical follow-up group underwent coronary angiography in the first year, including angiography performed for clinical reasons.
In the clinical follow-up group, coronary angiography was performed because of acute coronary syndrome (14%), recurrence of angina (60%), other clinical reasons (14%), or no clinical reason (12%). The control group also had more noninvasive physiological stress testing such as treadmill exercise test and stress nuclear study.
“Considering the invasive nature of coronary angiography and increased medical expenses, routine FUCAG after PCI would not be allowed as the usual clinical practice, unless patients have recurrent symptoms or objective evidence of ischemia,” the authors wrote.
“On the other hand, there was no excess of adverse clinical events with routine angiographic follow-up strategy except for the increased rate of 1-year repeat coronary revascularization.”
Given this, they suggested that scheduled angiographic follow-up might still be considered acceptable in early in vivo or pivotal trials of coronary devices.
While the authors acknowledged that the trial ended up being underpowered because of a reduced final sample size and a lower-than-anticipated event rate, they said the findings warranted further larger-scale studies. In particular, they highlighted the question of what impact routine follow-up angiography might have in higher-risk patients, such as those with left main or multivessel coronary artery disease.
“Finally, because patient demographics, practice patterns including the indication of coronary revascularization, and clinical outcomes in Japan may be different from those outside Japan, generalizing the present study results to populations outside Japan should be done with caution.”
This study was supported by an educational grant from the Research Institute for Production Development (Kyoto). One author declared honoraria for education consulting from Boston Scientific Corporation.
Key clinical point: Routine follow-up coronary angiography after percutaneous coronary intervention increases rates of coronary revascularization but does not improve outcomes.
Major finding: Patients who underwent routine angiographic follow-up had a cumulative 5-year incidence of all-cause death, myocardial infarction, stroke, or emergency hospitalization for acute coronary syndrome or heart failure similar to that of patients who had standard clinical follow-up (22.4% vs. 24.7%; P = .70).
Data source: ReACT: a prospective, open-label randomized controlled trial in 700 patients after percutaneous coronary intervention.
Disclosures: This study was supported by an educational grant from the Research Institute for Production Development (Kyoto). One author declared honoraria for education consulting from Boston Scientific Corporation.
AAA screening showed no mortality reduction in new trial
In contrast to previous studies, screening for abdominal aortic aneurysms in older men does not appear to have a significant effect on overall mortality, according to a prospective, randomized study.
Mortality from ruptured AAA remains high in older men, which has prompted four previous large randomized trials to explore whether screening men aged 65 years and older might reduce mortality.
Writing in the October 31 online edition of JAMA Internal Medicine, the authors reported the long-term outcomes of an Australian population-based trial of screening for abdominal aortic aneurysms in 49,801 men aged 64-83 years, of whom 19,249 were invited to screening and 12,203 underwent screening (isrctn.org identifier: ISRCTN16171472).
After a mean 12.8 years of follow-up, there was a non-significant 9% lower mortality in the invited screening group compared to the control group and a non-significant 8% lower mortality among men aged 65-74 years.
Overall, there were 90 deaths from ruptured AAA in the screening group and 98 in the control group (JAMA Internal Medicine 2016, October 31. DOI:10.1001/jamainternmed.2016.6633).
Among men aged 65-74 years, the prevalence of abdominal aortic aneurysms measuring 30 mm or more in diameter was 6.6%, while the prevalence of aneurysms measuring 55 mm or more was 0.4%.
While the rate of ruptured abdominal aortic aneurysms was significantly lower in the invited group compared to the control group (72 vs. 99, P = .04), the 30-day mortality after surgery for rupture was higher in the invited group compared to the control group (61.5% vs. 43.2%).
Screening had no meaningful impact on the risk of all-cause, cardiovascular, and other mortality, but men who had smoked had a higher risk of rupture and of death from a rupture than those who had never smoked, regardless of screening status.
The rate of total elective operations was significantly higher in the invited group compared to controls (536 vs. 414, P < .001), mainly in the first year after screening.
The authors calculated that to prevent one death from a ruptured abdominal aortic aneurysm in five years, 4,784 men aged 64-83 years or 3,290 men aged 65-74 years would need to be invited for screening.
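The "number needed to invite" figures above follow from a standard calculation: the reciprocal of the absolute risk reduction between the invited and control groups. The sketch below illustrates the arithmetic only; the risk inputs are assumptions chosen for illustration, not the trial's actual 5-year data.

```python
# Hedged sketch of a "number needed to invite" (NNI) calculation.
# The per-invitee 5-year AAA-death risks below are illustrative
# assumptions, not figures taken from the trial.

def number_needed_to_invite(risk_control: float, risk_invited: float) -> float:
    """NNI = 1 / absolute risk reduction between control and invited groups."""
    arr = risk_control - risk_invited  # absolute risk reduction
    if arr <= 0:
        raise ValueError("screening shows no absolute risk reduction")
    return 1.0 / arr

# Hypothetical risks chosen so the result lands near the reported
# figure of ~4,784 men aged 64-83 years per death prevented.
nni = number_needed_to_invite(0.00071, 0.00050)
print(round(nni))  # ≈ 4762 with these assumed risks
```

The smaller figure reported for men aged 65-74 years reflects the same formula applied to a subgroup with a larger absolute risk reduction.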
While the strength of the study was that it was truly population-based – using the electoral roll – the authors said the lack of a benefit from screening was likely due to the relatively low rate of rupture and death from AAA, as well as a high rate of elective surgery for this condition, in the control group.
The non-significant 8% reduction in mortality observed in the study was considerably smaller than the 42% and 66% reductions seen in previous trials with a similar length of follow-up.
The authors suggested this may also have been related to a lower fraction of invited men participating in screening, but pointed out that the incidence of AAA in men is declining.
“The reason for the decrease in incidence and prevalence is multifactorial but is probably driven by differences in rates of smoking and cessation because the relative risk for AAA events is 3- to 6-fold higher in smokers compared with non-smokers,” they wrote.
The authors said selective screening of smokers or ex-smokers may be more effective, but pointed out that this approach would miss around one-quarter of aneurysms. However, they suggested more targeted screening may yet achieve a benefit.
“The small overall benefit of population-wide screening does not mean that finding AAAs in suitable older men is not worthwhile because deaths from AAAs in men who actually attended for screening were halved by early detection and successful treatment.”
The study was supported by the National Health and Medical Research Council Project. The authors reported that they had no conflicts of interest.
These new data will not change the finding of robust reduction in AAA-related mortality from screening seen in all previous meta-analyses. However, the most recently updated meta-analyses now reveal the small reduction in all-cause mortality with screening to be statistically significant.
So although the findings of the Western Australian trial remain negative and raise some concerns about screening, their aggregation with other studies does not change the overall conclusions that screening substantially reduced AAA-related mortality and also resulted in a statistically significant reduction in all-cause mortality. Restricting screening to men who have smoked (the strongest risk factor for AAA) further lowers cost and increases efficiency.
Frank A. Lederle, MD, is from the Center for Chronic Disease Outcomes Research at the Veterans Affairs Medical Center. These comments are taken from an accompanying editorial (JAMA Internal Medicine 2016, October 31. DOI:10.1001/jamainternmed.2016.6663). No conflicts of interest were declared.
Key clinical point: Screening older men for abdominal aortic aneurysms did not significantly reduce overall mortality.
Major finding: Men invited to undergo screening for abdominal aortic aneurysms had a non-significant 9% lower mortality compared to a control group.
Data source: Prospective, population-based randomized controlled trial in 49,801 men aged 64-83 years.
Disclosures: The study was supported by the National Health and Medical Research Council Project. The authors reported that they had no conflicts of interest.