Experts emphasize scrupulous procedures for duodenoscopes, ERCP
Contaminated duodenoscopes have caused multiple outbreaks of multidrug-resistant infections with sometimes lethal consequences; until these instruments become easier to clean, personnel must strictly follow recommendations for sterilization, surveillance, and unit-by-unit quality control, according to an extensive commentary accompanying an American Gastroenterological Association Clinical Practice Update.
“Patients and physicians want and expect no transmission of infections by any medical instrument,” wrote Bret Petersen, M.D., of the Mayo Clinic, Rochester, Minn., Johannes Koch, M.D., of Virginia Mason Medical Center, Seattle, and Gregory Ginsberg, M.D., of the University of Pennsylvania, Philadelphia. “It is the collective responsibility of endoscope manufacturers, health systems, and providers to ensure endoscope reprocessing is mistake proof, establishing systems to identify and eliminate the risk of infection for patients undergoing flexible endoscopy.”
More than 650,000 endoscopic retrograde cholangiopancreatographies (ERCPs) occur in the United States annually, and “even the lowest reported defect rate of 0.7% will expose 4,500 patients to a preventable risk,” the experts noted. Carbapenem-resistant Enterobacteriaceae (CRE) are becoming more prevalent and have been transmitted during ERCP, even when personnel seem to have followed sterilization protocols to the letter. Clinical CRE infections have a fatality rate of at least 50%, months may elapse between exposure and symptom onset, and infections may involve distant organs. These factors, along with the phenomenon of “silent carriers,” have linked duodenoscopes to at least 250 multidrug-resistant infections and at least 20 fatalities worldwide, the experts wrote (Gastroenterology 2016 May 27. doi: 10.1053/j.gastro.2016.05.040).
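As a rough check of that figure (our arithmetic, not the authors’), applying a 0.7% defect rate to the annual U.S. ERCP volume gives

$$0.007 \times 650{,}000 \approx 4{,}550 \text{ patients per year,}$$

consistent with the roughly 4,500 patients the commentary cites.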
Current duodenoscopes can be tough to sterilize. Between 1 billion and 1 trillion organisms typically cover a used instrument. Bedside cleaning cuts this number about 1,000-fold, and manual washing kills about another million organisms, leaving up to 1 billion bugs to be killed by high-level disinfection. That’s “a tall order” that can strain space, time, and staffing resources, especially given that duodenoscopes have “tight crevices and mechanical joints that are exposed repeatedly to highly infectious bioburden,” the experts wrote. Furthermore, slips in processing enable the formation of biofilms that resist both cleaning and high-level disinfection.
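Read literally, the cited figures imply the following worst-case bioburden arithmetic (our reconstruction, treating “another million” as an absolute count of organisms rather than a millionfold reduction):

$$10^{12} \;\xrightarrow{\text{bedside cleaning } (\sim 10^{3}\text{-fold})}\; 10^{9} \;\xrightarrow{\text{manual washing } (-10^{6})}\; \approx 10^{9}\ \text{organisms left for high-level disinfection.}$$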
The key to stopping duodenoscopes from transmitting dangerous pathogens is manual cleaning, including wiping the outside of the duodenoscope, flushing its channels, and brushing the elevator lever “immediately after use and before the surfaces have become dried,” the experts stressed. Disinfectants should be used at the right concentration and temperature, and for the intended amount of time. Biofilms form on moist surfaces only, so channels should be flushed with alcohol (a desiccant), dried with forced air, and stored in a dry environment.
But recent outbreaks spurred the Food and Drug Administration to recommend further steps – including better oversight and training of reprocessing staff and closer attention to precleaning, manual cleaning, and manufacturer recommendations for use, such as determining whether the company used its own “proprietary” cleaning brushes in its validation studies, the experts noted. “Optional supplemental measures” include surveillance cultures of duodenoscopes, ethylene oxide sterilization, and double reprocessing, in which each scope undergoes two cycles of manual cleaning and high-level disinfection between patients. Double reprocessing might be the simplest and most easily adopted of these measures, the experts said. The AGA, for its part, recommends active surveillance of patients who undergo ERCP, surveillance cultures of scopes, and recording of the serial number of every scope used in every procedure.
Surveillance culture makes sense, but can be costly and hard to conduct and interpret because sampling detects vast numbers of nonpathogenic organisms in addition to any pathogens, the experts noted. The Centers for Disease Control and Prevention recommends that each institution follow its own complex outbreak sampling protocol and quarantine duodenoscopes for 2-3 days, pending negative results. That may mean buying more duodenoscopes. A less costly option is to culture a subset of scopes at the end of every workweek, the experts said. Real-time tests that reliably reflect bacterial culture results remain “elusive,” but testing for adenosine triphosphate after manual washing is easiest and best studied, they added.
Clearly, industry is responsible for making endoscopes that can be reliably disinfected. “Recent submissions by all three manufacturers (Olympus, Pentax, and Fujinon) have validated current reprocessing outcomes in test environments, and the FDA has ruled that postmarket studies of reprocessing in clinical settings are expected, but these results will not be forthcoming for several years,” the experts wrote. Redesigning duodenoscopes may be “the ultimate solution,” but in the meantime, endoscopists should carefully review indications for ERCP and ensure thorough informed consent. Doing so “will uphold the trust that we must achieve and maintain with our patients,” the authors said.
They had no funding sources. Dr. Koch has consulted for Sedasys, and Dr. Ginsberg has consulted for Olympus.
FROM GASTROENTEROLOGY
New antibiotics targeting MDR pathogens are expensive, but not impressive
The U.S. Food and Drug Administration has approved a number of new antibiotics targeting multidrug-resistant bacteria in the past 5 years, but the new drugs have not led to a substantial improvement in patient outcomes when compared with existing antibiotics, according to a recent analysis in the Annals of Internal Medicine.
The eight new antibiotics approved by the FDA between January 2010 and December 2015 were ceftaroline, fidaxomicin, bedaquiline, dalbavancin, tedizolid, oritavancin, ceftolozane/tazobactam, and ceftazidime/avibactam. Of those eight drugs, only three showed in vitro activity against the so-called ESKAPE pathogens (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter species). Only one drug, fidaxomicin, demonstrated in vitro activity against a pathogen classified as an urgent threat by the Centers for Disease Control and Prevention, Clostridium difficile. Bedaquiline was the only new antibiotic specifically indicated for a disease caused by a multidrug-resistant pathogen, although the investigators said most of the drugs demonstrated in vitro activity against gram-positive drug-resistant pathogens.
Importantly, the authors noted that in vitro activity does not necessarily reflect benefits on actual patient clinical outcomes, as exemplified by such drugs as tigecycline and doripenem.
The researchers found what they called “important deficiencies in the clinical trials leading to approval of these new antibiotic products.” Most pivotal trials were noninferiority trials, and the antibiotics were not studied to evaluate whether they offer substantial gains in efficacy over what is currently available, they noted. Additionally, none of the trials evaluated direct patient outcomes as primary end points, and some drugs lacked confirmatory evidence from a second independent trial or had no confirmatory trials at all.
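For readers unfamiliar with the design, a noninferiority trial tests only whether the new drug is not unacceptably worse than the comparator by a prespecified margin $\delta$; in simplified form, for cure probabilities $p$:

$$H_0:\; p_{\text{new}} - p_{\text{comparator}} \le -\delta \qquad \text{vs.} \qquad H_1:\; p_{\text{new}} - p_{\text{comparator}} > -\delta$$

Rejecting $H_0$ establishes “not much worse,” not superiority – which is why these approvals cannot, by design, demonstrate improved efficacy over existing agents.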
Researchers also examined the price of a course of treatment with each new antibiotic. Prices ranged from $1,195-$4,183 (4-14 days of ceftolozane/tazobactam for acute pyelonephritis and intra-abdominal infections) to $69,702 (24 weeks of bedaquiline) – quite a premium for antibiotics showing unclear evidence of additional benefit.
“As antibiotic innovation continues to move forward, greater attention needs to be paid to incentives for developing high-quality new products with demonstrated superiority to existing products on outcomes in patients with multidrug-resistant disease, replacing the current focus on quantity and presumed future benefits,” researchers concluded.
Read the full study in the Annals of Internal Medicine (doi: 10.7326/M16-0291).
FROM ANNALS OF INTERNAL MEDICINE
No evidence to show Alzheimer’s or Parkinson’s transmission via blood transfusion
A retrospective study of nearly 1.5 million patients over the course of 44 years showed no evidence to support the claim that neurodegenerative disorders can be transmitted between individuals through blood transfusions.
“Given that the neurodegenerative diseases on the dementia spectrum are relatively common, the misfolded proteins in affected persons have a wide tissue distribution, and the diseases may have a protracted prediagnostic course, potential transmission through transfusion could have important public health implications,” explained study authors led by Gustaf Edgren, MD, PhD, of the Karolinska Institute in Stockholm.
The retrospective cohort study analyzed data on 1,465,845 patients listed in nationwide transfusion registers, all of whom had undergone transfusions between 1968 and 2012 in Sweden, or during 1981-2012 in Denmark. The investigators looked for transfusions in which the donor had a diagnosis – either before or after the transfusion – of Alzheimer’s disease, Parkinson’s disease, or amyotrophic lateral sclerosis, and compared them with transfusions involving two healthy control groups. Creutzfeldt-Jakob disease was not included in the analysis (Ann Intern Med. 2016 Jun 28. doi: 10.7326/M15-2421).
All transfusion recipients were followed from 180 days after the transfusion date to minimize the risk of including a patient who had a neurodegenerative disorder that was present but not yet registered at the time of the transfusion. Follow-up lasted until the first diagnosis of a neurodegenerative disorder, death, emigration, or the end of the study period on Dec. 31, 2012.
If a donor had received a diagnosis of a neurodegenerative disorder within 20 years of the transfusion, all recipients from said donor were considered to be exposed. “The maximum latency of 20 years was used as a compromise between allowing a long preclinical phase – during which donors may still be infectious – and avoiding classifying clearly healthy donors as potentially infectious,” the authors wrote.
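A minimal sketch of that exposure rule in Python (our illustration only; the field names and the 20-year window expressed in 365-day years are assumptions, not the registries’ actual schema):

```python
from datetime import date, timedelta
from typing import Optional

MAX_LATENCY = timedelta(days=20 * 365)  # 20-year preclinical window described in the study

def recipient_exposed(donation_date: date, donor_diagnosis_date: Optional[date]) -> bool:
    """Classify all recipients of a donation as 'exposed' if the donor was
    diagnosed with a neurodegenerative disorder before donating, or within
    20 years afterward; donors never diagnosed contribute unexposed units."""
    if donor_diagnosis_date is None:
        return False
    if donor_diagnosis_date <= donation_date:
        return True  # donor already diagnosed at the time of donation
    return donor_diagnosis_date - donation_date <= MAX_LATENCY

# Example: a donor diagnosed 12 years after donating -> recipients count as exposed
print(recipient_exposed(date(1990, 5, 1), date(2002, 5, 1)))  # True
```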
Only 42,254 subjects (2.9%) were deemed to have been exposed to a neurodegenerative disorder through their blood donor. However, there was no evidence of an increased risk of dementia of any type among recipients of blood from affected donors over the course of the follow-up period (hazard ratio, 1.04; 95% confidence interval, 0.99-1.09), and similar results were obtained individually for Alzheimer’s disease and Parkinson’s disease. As a control measure, the investigators also tested for hepatitis C transmission before and after transfusions and were, “as expected, readily able to detect transmission of viral hepatitis,” whereas no comparable signal emerged for the neurodegenerative conditions.
“Despite accumulating scientific support for horizontal transmissibility of a range of disorders in the neurodegenerative spectrum through various methods of inoculation in animal models and of variant Creutzfeldt-Jakob disease from human and animal model data, this study provides evidence against blood transfusion as an important route of transmission of neurodegenerative diseases between humans,” the authors concluded.
The authors also added that “given that the study was based on the entire computerized blood banking history in two countries with several decades of follow-up, more comprehensive data are unlikely to be available in the foreseeable future.”
Funding for the study was provided by the Swedish Research Council, the Swedish Heart-Lung Foundation, the Swedish Society for Medical Research, and the Danish Council for Independent Research. One coauthor reported receiving a grant from the Danish Council for Independent Research, and another reported receiving grants from the Swedish Research Council and the Swedish Heart-Lung Foundation during the conduct of the study. No other authors reported any relevant financial disclosures.
FROM ANNALS OF INTERNAL MEDICINE
Key clinical point: There are no data to support the notion that human transmission of Alzheimer’s or Parkinson’s diseases can occur via blood transfusions.
Major finding: There was no evidence of any increased risk of dementia of any type from exposure to blood from affected donors in any of the recipients over the course of the follow-up period (HR, 1.04; 95% CI, 0.99-1.09).
Data source: Retrospective cohort study of 1,465,845 transfusion patients in Sweden and Denmark during 1968-2012.
Disclosures: The study was funded by the Swedish Research Council, the Swedish Heart-Lung Foundation, the Swedish Society for Medical Research, and the Danish Council for Independent Research. Two coauthors disclosed receiving grants from the funding institutions.
Seasonal variation not seen in C. difficile rates
BOSTON – No winter spike in Clostridium difficile infection (CDI) rates was seen among hospitalized patients after testing methodologies and frequency were accounted for, according to a large multinational study.
A total of 180 hospitals in five European countries had wide variation in CDI testing methods and testing density. However, among the hospitals that used a currently recommended toxin-detecting testing algorithm, there was no significant seasonal variation in cases, defined as mean cases per 10,000 patient bed-days per hospital per month (C/PBDs/H/M). The hospitals using toxin-detecting algorithms had a summer rate of 9.6 C/PBDs/H/M, compared with 8.0 in the winter months (P = .27).
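The rate metric is simply case counts normalized to bed-days; a minimal sketch (with made-up numbers) of how one hospital-month would be converted:

```python
def cases_per_10k_bed_days(cases: int, patient_bed_days: float) -> float:
    """CDI cases per 10,000 patient bed-days, computed per hospital per month
    (the study's C/PBDs/H/M unit)."""
    return 10_000 * cases / patient_bed_days

# Hypothetical month for one hospital: 12 CDI cases over 14,500 bed-days
print(round(cases_per_10k_bed_days(12, 14_500), 1))  # 8.3
```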
These results, presented at the annual meeting of the American Society for Microbiology by Kerrie Davies, a clinical scientist at the University of Leeds (England), stand in contrast to some other studies that have shown a wintertime peak in CDI incidence. The data presented help in “understanding the context in which published reported rate data have been generated,” said Ms. Davies, enabling a better understanding both of how samples are tested and of who gets tested.
The study enrolled 180 hospitals – 38 each in France and Italy, 37 each in Germany and the United Kingdom, and 30 in Spain. Institutions reported patient demographics, as well as CDI testing data and patient bed-days for CDI cases, for 1 year.
Current European and U.K. CDI testing algorithms, said Ms. Davies, begin either with testing for glutamate dehydrogenase (GDH) or with nucleic acid amplification testing (NAAT), and then proceed to enzyme-linked immunosorbent assay (ELISA) testing for C. difficile toxins A and B.
Other algorithms, for example those that begin with toxin testing, are not recommended, said Ms. Davies. Some institutions may diagnose CDI only by toxin detection, GDH testing, or NAAT testing.
For data analysis, Ms. Davies and her collaborators compared CDI-related PBDs and testing density during June, July, and August to data collected in December, January, and February. Testing methods were dichotomized to toxin-detecting CDI testing algorithms (TCTA, using GDH/toxin or NAAT/toxin), or non-TCTA methods, which included all other algorithms or stand-alone testing methods.
Wide variation was seen between countries in testing methodologies. The United Kingdom had the highest rate of TCTA testing, at 89%, while Germany had the lowest, at 8%, with 30 of 37 (81%) participating German hospitals using non–toxin detection methods.
In addition, both testing density and case incidence rates varied between countries. Standardizing test density to mean number of tests per 10,000 PBDs per hospital per month (T/PBDs/H/M), the United Kingdom had the highest density, at 96.0 T/PBDs/H/M, while France had the lowest, at 34.4 T/PBDs/H/M. Overall per-nation case rates ranged from 2.55 C/PBDs/H/M in the United Kingdom to 6.9 C/PBDs/H/M in Spain.
Ms. Davies and her collaborators also analyzed data for all of the hospitals in any country according to testing method. That analysis found no significant seasonal variation in testing rates for TCTA-using hospitals (mean T/PBDs/H/M in summer, 119.2 versus 102.4 in winter; P = .11) and no significant seasonal variation in CDI incidence. However, “the largest variation in CDI rates was seen in those hospitals using toxin-only diagnostic methods,” said Ms. Davies.
By contrast, for hospitals using non-TCTA methods, though testing rates did not change significantly, incidence was higher in the winter months, at a mean of 13.5 wintertime versus 10.0 summertime C/PBDs/H/M (P = .49).
One country, Italy, stood out for having both higher overall wintertime testing (mean 57.2 summertime versus 78.8 wintertime T/PBDs/H/M, P = .041), and higher incidence (mean 6.6 summertime versus 10.1 wintertime C/PBDs/H/M, P = .017).
“Reported CDI rates only increase in winter if testing rates increase concurrently, or if hospitals use nonrecommended testing methods for diagnosis, especially non–toxin detection methods,” said Ms. Davies.
The study investigators reported receiving financial support from Sanofi Pasteur.
On Twitter @karioakes
AT ASM MICROBE 2016
Key clinical point: After researchers accounted for testing frequency and methods, Clostridium difficile infection (CDI) rates were not higher in the winter months.
Major finding: In five European countries, hospitals that used direct toxin-detecting algorithms to test for CDI had no seasonal variation in CDI incidence (mean cases/patient bed-days/hospital/month in summer, 9.6; in winter, 8.0; P = .27).
Data source: Demographic and testing data collection from 180 hospitals in five European countries to ascertain CDI testing methods, rates, cases, and patient bed-days per month.
Disclosures: The study investigators reported financial support from Sanofi Pasteur.
Staffing, work environment drive VAP risk in the ICU
SAN FRANCISCO – The work environment for nurses and the physician staffing model in the intensive care unit influence patients’ likelihood of acquiring ventilator-associated pneumonia (VAP), based on a cohort study of 25 ICUs.
Overall, each 1-point increase in the score for the nurse work environment – indicating that nurses had a greater sense of playing an important role in patient care – was unexpectedly associated with a roughly sixfold higher rate of VAP among the ICU’s patients, according to data reported in a session and press briefing at an international conference of the American Thoracic Society. However, additional analyses showed that the rate of VAP was higher in closed units, where a board-certified critical care physician (intensivist) managed and led care, than in open units, where care is shared.
“We think that the organization of the ICU is actually influencing nursing practice, which is a really novel finding,” commented first author Deena Kelly Costa, PhD, RN, of the University of Michigan School of Nursing in Ann Arbor. “In closed ICUs, when you have a board-certified physician and an ICU team managing and leading care, even if the work environment is better, nurses may not feel as empowered to standardize their care or practice.”
“ICU nurses are the ones who are primarily responsible for VAP preventive practices: they keep the head of the bed higher than 45 degrees, they conduct oral care, they conduct (patient) surveillance. ICU physicians are involved with writing the orders and ventilator setting management. So how these providers work together could theoretically influence the risk for patients developing VAP,” Dr. Costa said.
“We need to be thinking a little bit more critically about not only the care that’s happening at the bedside... but also at an organizational level. How are these providers organized, and can we work together to improve patient outcomes?”
“I’m not suggesting that we get rid of all closed ICUs because I don’t think that’s the solution,” Dr. Costa maintained. “I think from an administrative perspective, we need to be considering what’s the organization of these clinicians and this unit, and [in a context-specific manner], how can we improve it for better patient outcomes? That may be both working on improving the work environment and making the nurses feel more empowered, or it could be potentially considering other staffing models.”
Some data have already linked a more favorable nurse work environment and the presence of a board-certified critical care physician independently with better patient outcomes in the ICU. But studies of their joint impact are lacking.
The investigators performed a secondary, unit-level analysis of nurse survey data collected during 2005 and 2006 in ICUs in southern Michigan.
In all, 462 nurses working in 25 ICUs completed the Practice Environment Scale of the Nursing Work Index, on which averaged summary scores range between 1 (unfavorable) and 4 (favorable). The scale captures environmental factors such as the adequacy of resources for nurses, support from their managers, and their level of involvement in hospital policy decisions.
The rate of VAP during the same period was assessed using data from more than 1,000 patients from each ICU.
The summary nurse work environment score averaged 2.69 points in the 21 ICUs that had a closed physician staffing model and 2.62 points in the 4 ICUs that had an open physician staffing model. The respective rates of VAP were 7.5% and 2.5%.
In adjusted analysis among all 25 ICUs, each 1-point increase in an ICU’s Practice Environment Scale score was associated with a sharply higher rate of VAP on the unit (adjusted incidence rate ratio, 5.76; P = .02).
However, there was a strong interaction between the score and physician staffing model (P less than .001). In open ICUs, as the score rose, the rate of VAP fell (from about 16% to 5%), whereas in closed ICUs, as the score rose, so did the rate of VAP (from about 3% to 14%).
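A hedged sketch of how such an adjusted incidence rate ratio and interaction might be estimated (our illustration only; the file name, column names, and offset choice are assumptions, and the study’s actual covariate set is not described here):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per ICU, with hypothetical columns:
#   vap_cases   - VAP cases observed on the unit
#   vent_days   - ventilator-days at risk (exposure offset)
#   pes_score   - mean Practice Environment Scale score (1 = unfavorable, 4 = favorable)
#   closed_unit - 1 for a closed physician staffing model, 0 for open
df = pd.read_csv("icu_units.csv")

# Poisson rate model with a score-by-staffing interaction; exponentiated
# coefficients are incidence rate ratios (IRRs), e.g., the IRR per 1-point
# rise in the PES score, and how that ratio shifts in closed units.
fit = smf.poisson(
    "vap_cases ~ pes_score * closed_unit",
    data=df,
    offset=np.log(df["vent_days"]),
).fit()
print(np.exp(fit.params))
```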
Dr. Costa disclosed that she had no relevant conflicts of interest. The parent survey was funded by the Blue Cross Blue Shield Foundation of Michigan.
SAN FRANCISCO – The work environment for nurses and the physician staffing model in the intensive care unit influence patients’ likelihood of acquiring ventilator-associated pneumonia (VAP), based on a cohort study of 25 ICUs.
Overall, each 1-point increase in the score for the nurse work environment – indicating that nurses had a greater sense of playing an important role in patient care – was unexpectedly associated with a roughly sixfold higher rate of VAP among the ICU’s patients, according to data reported in a session and press briefing at an international conference of the American Thoracic Society. However, additional analyses showed that the rate of VAP was higher in closed units where a board-certified critical care physician (intensivist) managed and led care rather than an open unit where care is shared.
“We think that the organization of the ICU is actually influencing nursing practice, which is a really novel finding,” commented first author Deena Kelly Costa, PhD, RN, of the University of Michigan School of Nursing in Ann Arbor. “In closed ICUs, when you have a board-certified physician and an ICU team managing and leading care, even if the work environment is better, nurses may not feel as empowered to standardize their care or practice.”
“ICU nurses are the ones who are primarily responsible for VAP preventive practices: they keep the head of the bed higher than 45 degrees, they conduct oral care, they conduct (patient) surveillance. ICU physicians are involved with writing the orders and ventilator setting management. So how these providers work together could theoretically influence the risk for patients developing VAP,” Dr. Costa said.
“We need to be thinking a little bit more critically about not only the care that’s happening at the bedside... but also at an organizational level. How are these providers organized, and can we work together to improve patient outcomes?”
“I’m not suggesting that we get rid of all closed ICUs because I don’t think that’s the solution,” Dr. Costa maintained. “I think from an administrative perspective, we need to be considering what’s the organization of these clinicians and this unit, and [in a context-specific manner], how can we improve it for better patient outcomes? That may be both working on improving the work environment and making the nurses feel more empowered, or it could be potentially considering other staffing models.”
Some data have already linked a more favorable nurse work environment and the presence of a board-certified critical care physician independently with better patient outcomes in the ICU. But studies of their joint impact are lacking.
The investigators performed a secondary, unit-level analysis of nurse survey data collected during 2005 and 2006 in ICUs in southern Michigan.
In all, 462 nurses working in 25 ICUs completed the Practice Environment Scale of the Nursing Work Index, on which averaged summary scores range between 1 (unfavorable) and 4 (favorable). The scale captures environmental factors such as the adequacy of resources for nurses, support from their managers, and their level of involvement in hospital policy decisions.
The rate of VAP during the same period was assessed using data from more than 1,000 patients from each ICU.
The summary nurse work environment score averaged 2.69 points in the 21 ICUs that had a closed physician staffing model and 2.62 points in the 4 ICUs that had an open physician staffing model. The respective rates of VAP were 7.5% and 2.5%.
In adjusted analysis among all 25 ICUs, each 1-point increase in an ICU’s Practice Environment Scale score was associated with a sharply higher rate of VAP on the unit (adjusted incidence rate ratio, 5.76; P = .02).
However, there was a strong interaction between the score and physician staffing model (P less than .001). In open ICUs, as the score rose, the rate of VAP fell (from about 16% to 5%), whereas in closed ICUs, as the score rose, so did the rate of VAP (from about 3% to 14%).
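To make the adjusted estimate concrete, the short Python sketch below shows how a per-point incidence rate ratio scales an expected VAP rate. The 5.76 figure is the study’s adjusted IRR; the baseline rate and score increments are hypothetical, chosen only for illustration.

    # Hypothetical illustration: a per-point incidence rate ratio (IRR) multiplies
    # the expected rate once for each 1-point rise in the work environment score.
    irr_per_point = 5.76      # adjusted IRR reported in the study
    baseline_rate = 0.03      # hypothetical VAP rate at some reference score
    for extra in (0.0, 0.5, 1.0):
        expected = baseline_rate * irr_per_point ** extra
        print(f"score +{extra}: expected VAP rate {expected:.3f}")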
Dr. Costa disclosed that she had no relevant conflicts of interest. The parent survey was funded by the Blue Cross Blue Shield Foundation of Michigan.
AT ATS 2016
Key clinical point: The impact of nurse work environment on risk of VAP in the ICU depends on the unit’s physician staffing model.
Major finding: A better nurse work environment was associated with a higher rate of VAP overall (incidence rate ratio, 5.76), but there was an interaction whereby the association was positive in closed units and negative in open units.
Data source: A cohort study of 25 ICUs, 462 nurses, and more than 25,000 patients in southern Michigan between 2005 and 2006.
Disclosures: Dr. Costa disclosed that she had no relevant conflicts of interest. The parent study was funded by the Blue Cross Blue Shield Foundation of Michigan.
Universal decolonization reduces CLABSI rates in large health care system
Universal decolonization of adult intensive care unit patients in a large community health system significantly reduced the incidence of central line–associated bloodstream infections (CLABSIs), according to Dr. Edward Septimus and his associates.
A total of 136 ICUs at 95 hospitals were included in the study. Universal decolonization was completed within 6 months. A total of 672 CLABSIs occurred in the 24 months before intervention, and 181 CLABSIs occurred in the 8 months after intervention. The CLABSI rate fell from 1.1/1,000 central-line days before intervention to 0.87/1,000 central-line days after intervention, a reduction of 23.5%.
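As a quick sanity check on these figures, a simple before/after comparison of the two crude rates gives roughly 21%; the reported 23.5% presumably reflects the study’s statistical modeling rather than raw division, so the Python sketch below is illustrative only.

    # Crude arithmetic for the change in CLABSI rate per 1,000 central-line days.
    # This raw comparison (~21%) will not exactly match the reported 23.5%,
    # which presumably comes from the study's adjusted model.
    rate_pre = 1.1    # CLABSIs per 1,000 central-line days, preintervention
    rate_post = 0.87  # CLABSIs per 1,000 central-line days, postintervention
    print(f"crude reduction: {(rate_pre - rate_post) / rate_pre:.1%}")  # ~20.9%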
No evidence of a trend over time was found in either the pre- or postintervention period, and the postintervention reduction in CLABSIs held after adjustment for seasonality, number of beds in the ICU, and unit type. The gram-positive CLABSI rate was reduced by 28.7% after intervention, and the rate of CLABSI due to Staphylococcus aureus was reduced by 31.9%.
“This rapid dissemination and implementation program demonstrated the utility of a protocol-specific toolkit, coupled with a multistep translation program to implement universal decolonization in a large, complex organization. Doing so demonstrated that use of CHG [chlorhexidine] bathing plus nasal mupirocin in routine practice reduced ICU CLABSIs at a level commensurate with that achieved within the framework of a clinical trial. This supports the use of universal decolonization in a wide variety of acute care facilities across the United States,” the investigators wrote.
Find the full study in Clinical Infectious Diseases (doi: 10.1093/cid/ciw282).
FROM CLINICAL INFECTIOUS DISEASES
Primary care management of sepsis survivors does not improve mental health quality of life
Patients who have survived sepsis or septic shock derive no significant mental health–related quality of life benefit from a primary care–based management intervention, according to a new study published in JAMA.
“Many survivors of sepsis have multiple medical comorbidities that are typically managed in primary care [but] interventions for managing sepsis sequelae in primary care have not been developed,” states the study, which was led by Jochen Gensichen, MD, of the Institute of General Practice & Family Medicine at Jena (Germany) University Hospital.
“To our knowledge, this is the first large-scale, randomized controlled clinical trial of an intervention to improve outcomes in survivors of sepsis in primary care,” Dr. Gensichen and his coinvestigators added.
The study recruited sepsis and septic shock survivors from nine ICUs across Germany between February 2011 and December 2014, excluding any patients with cognitive impairment, defined as a Telephone Interview of Cognitive Status score no greater than 27. Ultimately, 291 patients aged 18 years or older (mean age of 61.6 years) were selected for inclusion and randomized into cohorts receiving either primary care–based intervention (n = 148) or usual care (n = 143) (JAMA. 2016;315:2703-11. doi: 10.1001/jama.2016.7207).
Those assigned to the usual care cohort received the standard care that their primary care providers would normally deliver, which included “periodic contacts, referrals to specialists, and prescription of medication and therapeutic aids at quantities comparable with those for other populations with multiple chronic conditions.” Those in the intervention cohort received active symptom monitoring from nurses trained as case managers, while their providers received evidence-based care training and clinical decision support. Case managers took patients through an hour-long, face-to-face training on sepsis sequelae within 2-20 days of ICU discharge, with follow-up conversations by phone.
“Case managers monitored patients’ symptoms using validated screening tools to assess critical illness polyneuropathy/myopathy, wasting, neurocognitive deficits, [posttraumatic stress disorder], depressive and pain symptoms, as well as patient self-management behaviors focusing on physical activity and individual self-management goals,” the authors said, noting that case managers would report their results to a consulting physician who “supervised the case managers and provided clinical decision support to the [primary care physicians].”
Baseline Mental Component Summary (MCS) scores, which gauge mental health–related quality of life, averaged 49.1 in the intervention cohort and 49.3 in the control cohort. At 6 months’ follow-up, MCS scores were 52.9 in the intervention group and 51.0 in the control group, for mean changes from baseline of 3.8 (95% confidence interval, 1.05-6.54) and 1.6 (95% CI, –1.22 to 4.51), respectively. The mean treatment effect was 2.15 (95% CI, –1.79 to 6.09; P = .28), indicating no significant difference between the two groups.
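The reported treatment effect is simply the difference between the two groups’ mean changes from baseline; the rounded changes of 3.8 and 1.6 are consistent with unrounded values near those in the short check below (the two decimals are inferred, not quoted from the paper).

    # The between-group treatment effect equals the difference in mean change.
    # 3.79 and 1.64 are inferred unrounded values consistent with the reported
    # rounded changes (3.8 and 1.6) and the reported effect of 2.15.
    change_intervention = 3.79
    change_control = 1.64
    print(f"mean treatment effect: {change_intervention - change_control:.2f}")  # 2.15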
“There was no evidence for a differential treatment effect on the study’s primary outcome, postsepsis MCS scores,” the authors concluded. “This finding is similar to those from previous trials of care management interventions following critical illness.”
The authors added that “further research is needed to determine if modified approaches to primary care management may be more effective.”
The study was funded by the Center for Sepsis Control and Care, the German Federal Ministry of Education and Research, and the German Sepsis Society. Dr. Gensichen reported receiving personal fees from the Primary Health Care Foundation and receiving a grant from the German Federal Ministry of Education and Research.
FROM JAMA
Key clinical point: Primary care intervention does not improve mental health–related quality of life in survivors of sepsis or septic shock.
Major finding: Mean Mental Component Summary (MCS) scores improved similarly in the intervention group (49.1 at baseline vs. 52.9 at 6 months’ follow-up; 95% CI for change, 1.05-6.54) and the control group (49.3 vs. 51.0; 95% CI for change, –1.22 to 4.51), with no significant between-group treatment effect.
Data source: A multicenter, unblinded, two-group randomized clinical trial of 291 adult sepsis or septic shock survivors recruited from nine German ICUs from February 2011 through December 2014.
Disclosures: Study funded by the Center for Sepsis Control and Care, the German Federal Ministry of Education and Research, and the German Sepsis Society. Dr. Gensichen reported receiving personal fees from The Primary Health Care Foundation and receiving a grant from the German Federal Ministry of Education and Research.
Hospital-acquired respiratory viruses cause significant morbidity, mortality
BOSTON – Hospital-acquired respiratory viral infections may be a significant and underappreciated cause of morbidity and mortality among hospitalized patients.
According to a multisite, retrospective chart review of 44 patients with hospital-acquired respiratory viral illnesses (HA-RVIs), 17 patients (39%) died in-hospital. Further, of the 27 who survived, 18 (66.6%) were discharged to an advanced care setting rather than to home, though just 11/44 (25%) had been living in an advanced care setting before admission.
For the hospitalizations complicated by HA-RVI, the average length of stay was 30.4 days, with a positive respiratory virus panel (RVP) result occurring at a mean 18 days after admission.
“HA-RVIs are an underappreciated event and appear to target the sickest patients in the hospital,” said coauthor Dr. Matthew Sims, director of infectious diseases research at Beaumont Hospital, Rochester, Mich., at a poster session of the annual meeting of the American Society for Microbiology.
First author Dr. Adam K. Skrzynski, also of Beaumont Health, and his coauthors analyzed 4,065 patients who had a positive RVP result during hospitalization at a regional hospital system between September 2011 and May 2015; the 1.1% of patients who formed the study cohort had to have symptoms of a respiratory infection arising after more than 5 days of hospitalization. Mortality data were collected for the first 33 days of hospitalization.
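The cohort fraction follows directly from the screening figures reported above; a one-line check:

    # 44 symptomatic hospital-acquired cases among 4,065 RVP-positive patients.
    print(f"{44 / 4065:.1%}")  # 1.1%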
Positive RVP results for those included in the study came primarily from nasopharyngeal swabs (n = 32), with the remainder from bronchoalveolar lavage (n = 11) and sputum (n = 1). Most patients were female (29 of 44; 66%) and elderly, with an average age of 73.8 years. In an interview, Dr. Sims said that many patients were smokers and that chronic obstructive pulmonary disease and obesity were common comorbidities.
The prognosis was particularly grim for the 12 patients (27.3%) who were admitted to the ICU: 10 (83.3%) died after an average 9.6 days in the ICU. Advanced interventions did not seem to make a difference, either. “Intubation didn’t help these patients,” said Dr. Sims. Nine patients (20.5%) were intubated within 7 days of their positive RVP results. Intubation lasted an average 7.6 days, and all nine of these patients died.
The RVP came into use in 2011 and made it possible to identify whether a respiratory virus was causing symptoms – and which virus was the culprit – said Dr. Sims. For the studied population, 13 of 44 patients had influenza; 11 of those had influenza A and 2 had influenza B. The next most common pathogen was parainfluenza, with 10 positive RVP results.
Dr. Sims said he and his coinvestigators were surprised to find that, although influenza A was the most common pathogen, only 18.8% of the patients with influenza A died during the study period. “While it is possible that the high frequency of influenza infection in our study may be due to poor vaccine-strain matching for the years in question, the lower mortality rate seen in influenza A infection may be due to our hospital’s mandatory influenza vaccination policy and subsequent protection against mortality,” Dr. Skrzynski and his coauthors wrote.
There were seasonal trends in mortality, with 70.6% of deaths occurring in the spring (April-June) and an additional 23.3% in the winter (January-March). Parainfluenza infection peaked in the spring, and influenza peaked in the winter months.
Dr. Sims said the study underlines the importance of encouraging ill hospital staff members to stay home and of keeping family members with respiratory symptoms away from fragile patients. Dr. Skrzynski and his coauthors also wrote that “immunization of healthcare personnel against influenza should be mandatory.”
Still to be answered, said Dr. Sims, is the association between comorbidities and the potentially lethal effects of HA-RVIs. He and his coinvestigators are currently performing a matched case-control study to tease out these relationships.
Dr. Skrzynski reported no outside funding source, and the study authors had no financial disclosures.
On Twitter @karioakes
AT ASM MICROBE 2016
Key clinical point: Hospital-acquired respiratory viral illnesses had a 39% mortality rate.
Major finding: Of 44 symptomatic patients with positive respiratory virus panel screens, 17 died and 2/3 of the survivors went to advanced care settings on discharge.
Data source: Retrospective multisite chart review of 44 patients with HA-RVIs and positive RVP screens.
Disclosures: No external funding source was reported, and the study authors had no disclosures.
Recurrent C. diff infection much more costly to treat
The costs of treating patients with recurrent Clostridium difficile infection are approximately two to three times the treatment costs for patients with primary CDI, according to a study of hospitalized patients in a large tertiary care center.
Kevin W. Garey, Pharm.D., professor of pharmacy practice at the University of Houston College of Pharmacy, and his coauthors wrote in the Journal of Hospital Infection that the introduction of costly therapies, such as the narrow-spectrum macrocyclic antibiotic fidaxomicin and fecal microbiota transplantation, both of which reduce the risk of recurrent CDI, has increased interest in the economic burden of recurrent CDI (J Hosp Infect. 2016 Jul;93:286-9).
Between 2007 and 2013, Dr. Garey and his colleagues assessed the additional costs of recurrent CDI by calculating total hospital length of stay (LOS), CDI-attributable LOS, and pharmacologic and hospitalization costs for 540 hospitalized adult patients (42% male) with a diagnosis of primary CDI. Of these patients, 95 (18%) experienced 101 recurrent CDI episodes. CDI-attributable hospital admissions occurred in 307 of 540 (57%) primary CDI episodes and in 64 of 101 (63%) recurrent CDI episodes.
The investigators estimated total and CDI-attributable hospitalization costs based on Healthcare Cost and Utilization Project data for average daily health care costs, multiplied by the total and CDI-attributable length of hospital stay. They then compared total hospital LOS, CDI-attributable LOS, and pharmacologic and hospitalization costs between patients with primary CDI only versus patients who experienced recurrent CDI.
Dr. Garey and his colleagues found that CDI-attributable median LOS and costs were 7 days and $13,168 for patients with primary CDI only vs. 15 days and $28,218 for patients with recurrent CDI (P less than .0001 for each). Total hospital median LOS and costs were 11 days and $20,693 for patients with primary CDI only vs. 24 days and $45,148 for patients with recurrent CDI (P less than .0001 for each).
The median cost of pharmacologic treatment while hospitalized was $60 for patients with primary CDI only, and $140 for patients with recurrent CDI (P = .0013).
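The estimation approach described earlier multiplies an average daily cost by length of stay, so each reported (cost, LOS) pair implies a daily cost, and all four reported medians point to roughly the same figure. The ~$1,881/day value that emerges below is inferred from the published numbers, not stated in the study, so this is a back-of-the-envelope check rather than the authors’ calculation.

    # Back-of-the-envelope check: the method is cost = average daily cost x LOS,
    # so each reported (cost, LOS) pair implies a daily cost. All four cluster
    # near $1,881/day, a figure inferred here rather than stated in the study.
    pairs = {
        "primary CDI, attributable": (13168, 7),
        "recurrent CDI, attributable": (28218, 15),
        "primary CDI, total": (20693, 11),
        "recurrent CDI, total": (45148, 24),
    }
    for label, (cost, los) in pairs.items():
        print(f"{label}: ${cost / los:,.0f} per day")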
“Patients with CDI experience a significant health care economic burden that can be directly related to CDI,” the authors concluded.
The study was funded by a research grant from Merck. Dr. Garey reported receiving funding from Cubist, Summit, and Merck, while other coauthors reported research support from Merck, Cubist, and Actelion.
On Twitter @richpizzi
FROM THE JOURNAL OF HOSPITAL INFECTION
Key clinical point: The economic cost burden increased significantly for patients with recurrent C. difficile infections, compared with patients with primary CDI.
Major finding: Median hospital length of stay and costs attributable to C. difficile infection were 7 days and $13,168 for patients with primary CDI only vs. 15 days and $28,218 for patients with recurrent CDI.
Data source: A prospective, observational cohort study of 540 hospitalized adult patients with primary CDI followed for 3 months to assess for recurrent CDI episodes.
Disclosures: The study was funded by a research grant from Merck. Dr. Garey reported receiving funding from Cubist, Summit, and Merck, while other coauthors reported research support from Merck, Cubist, and Actelion.
Building owners, managers must do more to prevent Legionnaires’ disease
Building owners, managers, and administrators of hospitals and other health care facilities around the country are being urged to shore up their water system management programs to prevent further outbreaks of Legionnaires’ disease, the focus of the Centers for Disease Control and Prevention’s latest Vital Signs report.
“Almost all Legionnaires’ disease outbreaks are preventable with improvements in water system management,” explained CDC Director Tom Frieden, adding that “At the end of the day, building owners and managers need to take steps to reduce the risk of Legionnaires’ disease [and] work together to reduce this risk and limit the number of people exposed, infected, and hospitalized or, potentially, fatally infected.”
For the report, the CDC investigated 27 outbreaks of Legionnaires’ disease in the United States from 2000 through 2014, which involved a total of 415 cases and 65 fatalities. In each outbreak analysis, the location, source of exposure, and problems with environmental controls of Legionella – the bacterium that causes the disease – were evaluated.
Hotels and resorts accounted for 44% of all outbreaks over the 15-year period, followed by long-term care facilities (19%) and hospitals (15%). However, outbreaks at the latter two location types accounted for 85% of all deaths, while outbreaks at hotels and resorts accounted for only 6%. Potable water was the most common direct cause of Legionella infections, followed by water from cooling towers, hot tubs, industrial equipment, and decorative fountains.
Additionally, 23 of the investigations yielded enough information to determine the exact cause of the outbreak; each of these outbreaks stemmed from at least one of four issues. The first was process failures, such as not having a proper water system management program in place to handle Legionella; this was found in two-thirds of the outbreaks. The second major cause was human error, such as not replacing filters or tubing as recommended by manufacturers, which was a cause in half of the outbreaks. The third was equipment breakdown, which was found in one-third of the outbreaks. Finally, reasons external to the buildings themselves – such as water main breaks or disruptions caused by nearby construction – factored into one-third of the outbreaks.
“Large, recent outbreaks of Legionnaires’ disease in New York City and Flint, Michigan, have brought attention to the disease and highlight the need for us to understand why these outbreaks happen and how best to prevent them, [which is] why this Vital Signs is targeted to a specific audience that we in public health don’t talk [to] often enough: building owners and managers,” Dr. Frieden said. “It’s not a traditional public health audience, [but] they are the key to environmental controls in buildings that we live in, get our health care in, and work in everyday.”
To that end, Dr. Frieden announced the release of a new CDC toolkit entitled “Developing a Water Management Program to Reduce Legionella Growth & Spread in Buildings: A Practical Guide to Implementing Industry Standards,” which building owners, managers, and administrators can turn to for guidance on how to implement effective water system management protocols in their buildings.
Legionnaires’ disease is a serious lung infection caused by inhaling Legionella bacteria, which can be found in water and spread as airborne mist. Elderly individuals, as well as those with immune systems suppressed by underlying illnesses, are at heightened risk for Legionnaires’ disease, which would explain the higher death rates observed at hospitals and long-term care facilities. Dr. Frieden stated that outbreaks and cases of Legionnaires’ disease are on the rise nationally, with about 5,000 infections and 20 outbreaks occurring annually; roughly 10% of infections result in death.
The uptick in recent cases is likely because of “the aging of the population, the increase in chronic illness, [an] increase in immunosuppression through use of medication to treat a variety of conditions [and] an aging plumbing infrastructure and that makes maintenance all the more challenging,” according to Dr. Frieden. “It is also possible that increased use of diagnostic tests and more reliable reporting are contributing to some of the rising rates.”
FROM CDC VITAL SIGNS