Hormone therapy 10 years post menopause increases risks
Hormone therapy in postmenopausal women does not prevent heart disease but does increase the risk of stroke and blood clots, according to a recently updated Cochrane review.
“Our review findings provide strong evidence that treatment with hormone therapy in postmenopausal women for either primary or secondary prevention of cardiovascular disease events has little if any benefit overall, and causes an increase in the risk of stroke, or venous thromboembolic events,” reported Dr. Henry Boardman of the John Radcliffe Hospital, University of Oxford, and his associates.
The researchers updated a review published in 2013 with data from an additional six randomized controlled trials. The 19 trials in total, involving 40,410 postmenopausal women, all compared orally administered estrogen, with or without progestogen, to placebo or no treatment for a minimum of 6 months (Cochrane Database Syst. Rev. 2015 March 10 [doi:10.1002/14651858.CD002229.pub4]).
The women in the studies, most of which were conducted in the United States, had an average age older than 60 years and received hormone therapy for anywhere from 7 months to 10 years across the studies. The overall quality of the studies was “good,” with a low risk of bias.
The sharp rise in cardiovascular disease rates in women after menopause had been hypothesized to stem from declining hormone levels, which raise the androgen-to-estradiol ratio. Observational studies starting in the 1980s showed lower rates of mortality and cardiovascular events in women receiving hormone therapy – previously called hormone replacement therapy – than in those not receiving it.
Two subsequent randomized controlled trials contradicted these observational findings, though, leading to further study. In this review, hormone therapy showed no risk reduction for all-cause mortality, cardiovascular death, nonfatal myocardial infarction, angina, or revascularization.
However, the overall risk of stroke among women receiving hormone therapy for either primary or secondary prevention was 24% higher than that of women receiving placebo (relative risk, 1.24), an absolute risk increase of 6 additional strokes per 1,000 women.
Venous thromboembolic events occurred 92% more often and pulmonary emboli 81% more often in the hormone treatment groups (RR, 1.92 and 1.81, respectively), with absolute risk increases of 8 per 1,000 women and 4 per 1,000 women, respectively.
The researchers calculated the number needed to treat for one additional harmful outcome (NNTH) as 165 women for stroke, 118 for venous thromboembolism, and 242 for pulmonary embolism.
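The NNTH figures follow directly from the absolute risk increases above: NNTH is simply the reciprocal of the absolute risk increase. A minimal sketch of the arithmetic (using the rounded per-1,000 figures reported here, which give slightly different values than the review's, which were presumably computed from unrounded data):

```python
def nnth(ari_per_1000):
    """NNTH (number needed to treat for one additional harm) is the
    reciprocal of the absolute risk increase (ARI)."""
    return 1000 / ari_per_1000

# Rounded absolute risk increases reported in the review:
print(round(nnth(6)))  # stroke: 167 (review reports 165)
print(round(nnth(8)))  # venous thromboembolism: 125 (review: 118)
print(round(nnth(4)))  # pulmonary embolism: 250 (review: 242)
```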
Further analysis revealed that the relative risks or protection hormone therapy conferred depended on how long after menopause women started treatment.
Mortality was reduced 30% and coronary heart disease was reduced 48% in women who began hormone therapy less than 10 years after menopause (RR 0.70 and RR 0.52, respectively); these women still faced a 74% increased risk of venous thromboembolism, but no increased risk of stroke.
Meanwhile, women who started hormone therapy more than 10 years after menopause had a 21% increased risk of stroke and a 96% increased risk of venous thromboembolism, but no reduction in overall mortality or coronary heart disease.
“It is worth noting that the benefit seen in survival and coronary heart disease for the group starting treatment less than 10 years after the menopause is from combining five trials all performed in primary prevention populations and all with quite long follow-up, ranging from 3.4 to 10.1 years,” the authors wrote.
These results may reflect a time interaction, with coronary heart disease events occurring earlier in predisposed women, making it impossible to say whether short-duration therapy is beneficial in this population, the researchers wrote.
Eighteen of the 19 trials included in the analysis reported the funding source. One study was exclusively funded by Wyeth-Ayerst. Two studies received partial funding from Novo-Nordisk Pharmaceutical, and one study was funded by the National Institutes of Health with support from Wyeth-Ayerst, Hoffman-LaRoche, Pharmacia, and Upjohn. Eight other studies used medication provided by various pharmaceutical companies.
FROM COCHRANE DATABASE OF SYSTEMATIC REVIEWS
Key clinical point: Hormone therapy in postmenopausal women increases stroke risk.
Major finding: Stroke increased by 24%, venous thromboembolism by 92%, and pulmonary embolism by 81% in postmenopausal women receiving hormone therapy.
Data source: A review and meta-analysis of 19 randomized controlled trials involving 40,410 postmenopausal women who received orally administered hormone therapy, placebo, or no treatment for prevention of cardiovascular disease.
Disclosures: One study was funded by Wyeth-Ayerst. Two studies received partial funding from Novo-Nordisk Pharmaceutical, and one study was funded by the National Institutes of Health with support from Wyeth-Ayerst, Hoffman-LaRoche, Pharmacia, and Upjohn. Eight other studies used medication provided by various pharmaceutical companies.
VIDEO: Meet Frankie and Sophie, the thyroid cancer–sniffing dogs
SAN DIEGO – Researchers at the University of Arkansas for Medical Sciences in Little Rock are teaching dogs to detect thyroid cancer from urine samples.
The dogs alert on samples in which they detect cancer and remain passive otherwise. The first graduate of the program, a German shepherd mix named Frankie, got it right in 30 of 34 cases, matching final surgical pathology results with a sensitivity of 86.6% and a specificity of 89.5%.
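Sensitivity and specificity are just the fractions of cancer samples and cancer-free samples, respectively, that the dog called correctly. A minimal sketch of that calculation, using hypothetical confusion-matrix counts (13 of 15 cancer samples and 17 of 19 benign samples correct) chosen only to be consistent with the 30-of-34 total; the actual breakdown was not reported here:

```python
def sensitivity(true_pos, false_neg):
    # Fraction of truly positive samples the dog alerted on
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Fraction of truly negative samples the dog stayed passive on
    return true_neg / (true_neg + false_pos)

# Hypothetical counts consistent with 30 correct calls out of 34:
tp, fn = 13, 2   # cancer samples: correct alerts vs. misses
tn, fp = 17, 2   # benign samples: correct passes vs. false alerts
print(f"{sensitivity(tp, fn):.1%}")  # 86.7% (reported: 86.6%)
print(f"{specificity(tn, fp):.1%}")  # 89.5%
```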
With results like those, it might not be too long before Frankie and his colleagues are providing inexpensive adjunct diagnostic services when test results are uncertain, and helping underserved areas with limited diagnostic capacity, the researchers noted.
At the Endocrine Society meeting, investigator Dr. Andrew Hinson shared clips of Frankie and another recent graduate, a border collie mix named Sophie, and explained the project’s next steps.
Frankie was rescued by principal investigator Dr. Arny Ferrando. Sophie and other dogs in the program were also rescued from local animal shelters.
More information is available at www.thefrankiefoundation.org.
The video associated with this article is no longer available on this site. Please view all of our videos on the MDedge YouTube channel
AT ENDO 2015
Heparin, warfarin tied to similar VTE rates after radical cystectomy
Venous thromboembolisms affected 6.4% of patients who underwent radical cystectomy, even though all patients received heparin in the hospital as recommended by the American Urological Association, researchers reported.
“Using an in-house, heparin-based anticoagulation protocol consistent with current AUA guidelines has not decreased the rate of venous thromboembolism compared to historical warfarin use,” wrote Dr. Andrew Sun and his colleagues at the University of Southern California Institute of Urology in Los Angeles. Most episodes of VTE occurred after patients were discharged home, and “future studies are needed to establish the benefits of extended-duration [VTE] prophylaxis regimens that cover the critical posthospitalization period,” the researchers added (J. Urol. 2015;193:565-9).
Previous studies have reported venous thromboembolism rates of 3%-6% in cystectomy patients, a rate that is more than double that reported for nephrectomy or prostatectomy patients. For their study, the investigators retrospectively assessed 2,316 patients who underwent open radical cystectomy and extended pelvic lymph node dissection for urothelial bladder cancer between 1971 and 2012. Symptomatic VTE developed among 109 patients overall (4.7%), compared with 6.4% of those who received the modern, heparin-based protocol implemented in 2009 (P = .089).
Furthermore, 58% of all cases occurred after patients stopped anticoagulation therapy and were discharged home. The median time of onset was 20 days after surgery (range, 2-91 days), and VTE was significantly more common among patients with a higher body mass index, prolonged hospital stays, positive surgical margins, and orthotopic diversion procedures, compared with other patients. Surgical techniques remained consistent throughout the study.
The study was retrospective, and thus “could not prove any cause and effect relationships. This underscores the need for additional prospective data in this area of research,” said the investigators. “We focused only on open radical cystectomy, and thus, findings may not be generalizable to minimally invasive modalities, on which there is even a greater paucity of data.”
Senior author Dr. Siamak Daneshmand reported financial or other relationships with Endo and Cubist. The authors reported no funding sources or other relevant conflicts of interest.
FROM THE JOURNAL OF UROLOGY
Key clinical point: Heparin and warfarin were linked to similar rates of postcystectomy venous thromboembolism.
Major finding: Symptomatic VTE affected 4.7% of patients in the overall cohort, compared with 6.4% of those treated with the modern, heparin-based protocol (P = .089).
Data source: A single-center retrospective cohort study of 2,316 patients who underwent open radical cystectomy and extended pelvic lymph node dissection.
Disclosures: Senior author Dr. Siamak Daneshmand reported financial or other relationships with Endo and Cubist. The authors reported no funding sources or other relevant conflicts of interest.
Questions on stroke ambulance feasibility
The TPA ambulance, armed with its own CT scanner, has arrived in the United States after several successful years in Germany.
Now what?
Like all new advances, it presents a difficult balance between costs and benefits. The money, in the end, is what it really comes down to. Will the cost of a CT ambulance, the equipment needed to send images to a radiologist, the extra training for EMTs, the price of stocking TPA on board, and maybe even having a neurologist on the ride (or telemedicine for one to see the patient) be offset by money saved on rehabilitation costs, better recoveries, fewer complications, even returning a patient to work?
I have no idea. I’m not sure anyone else does, either.
Certainly, I support the idea of improved stroke care. Although far from ideal, TPA is the only thing we have right now, and the sooner it’s given, the better. Most neurologists will agree. But who’s going to pay for this?
The insurance companies, obviously. But money is finite. What if we upgrade all these ambulances, only to find that there’s no significant cost savings on rehab and recovery when TPA is used in the field? Then the money comes out of doctors’ and nurses’ salaries, higher premiums for everyone, and a cutback in treatment for some other disorder. I’m pretty sure it won’t be taken out of an insurance executive’s year-end bonus.
And just try explaining that to the family of a stroke victim.
It’s not practical to put a CT scanner in every ambulance, so where do we put those so equipped? Again, there’s no easy answer. In areas with large retirement communities? Seems like a safe bet, but young people have strokes, too. Only in cities? More people live in cities, but those in rural areas may be too far from a hospital to receive TPA early. Shouldn’t they have one, too?
Who’s going to make the decision to send the TPA ambulance vs. the regular ambulance? That’s another tough question. The layman who calls in usually isn’t sure what’s going on, only that an ambulance is needed. The dispatcher often can’t tell over the phone if the patient has had a stroke, seizure, or psychogenic event. Should a neurologist or emergency medicine physician make the decision? Maybe, but how much extra time will it take to get one on the line? And, even then, they’ll be making a critical decision with sparse, secondhand information. What if the special ambulance is mistakenly sent to deal with a conversion disorder, only to have a legitimate stroke occur elsewhere when it’s no longer immediately available? That, inevitably, will lead to a lawsuit because the wrong ambulance was sent.
I’m not against the stroke ambulance – far from it – but there are still a lot of questions to be answered. Putting a CT scanner and TPA in an ambulance is, comparatively, the easiest part.
Dr. Block has a solo neurology practice in Scottsdale, Ariz.
The TPA ambulance, armed with its own CT scanner, has arrived in the United States after several successful years in Germany.
Now what?
Like all new advances, it’s a difficult balance between costs and benefits. The money, in the end, is what it really comes down to. Will the cost of a CT ambulance, the equipment needed to send images to a radiologist, the extra training for EMTs, the price of stocking TPA on board, and maybe even having a neurologist on the ride (or telemedicine for one to see the patient) be offset by money saved on rehabilitation costs, better recoveries, fewer complications, even returning a patient to work?
I have no idea. I’m not sure anyone else does, either.
Certainly, I support the idea of improved stroke care. Although far from ideal, TPA is the only thing we have right now, and the sooner it’s given, the better. Most neurologists will agree. But who’s going to pay for this?
The insurance companies, obviously. But money is finite. What if we upgrade all these ambulances, only to find that there’s no significant cost savings on rehab and recovery when TPA is used in the field? Then the money comes out of doctors’ and nurses’ salaries, higher premiums for everyone, and a cutback in treatment for some other disorder. I’m pretty sure it won’t be taken out of an insurance executive’s year-end bonus.
And just try explaining that to the family of a stroke victim.
It’s not practical to put a CT scanner in every ambulance, so where do we put those so equipped? Again, there’s no easy answer. In areas with large retirement communities? Seems like a safe bet, but young people have strokes, too. Only in cities? More people live in cities, but those in rural areas may be too far from a hospital to receive TPA early. Shouldn’t they have one, too?
Who’s going to make the decision to send the TPA ambulance vs. the regular ambulance? That’s another tough question. The layman who calls in usually isn’t sure what’s going on, only that an ambulance is needed. The dispatcher often can’t tell over the phone if the patient has had a stroke, seizure, or psychogenic event. Should a neurologist or emergency medicine physician make the decision? Maybe, but how much extra time will it take to get one on the line? And, even then, they’ll be making a critical decision with sparse, secondhand information. What if the special ambulance is mistakenly sent to deal with a conversion disorder, only to have a legitimate stroke occur elsewhere when it’s no longer immediately available? That, inevitably, will lead to a lawsuit because the wrong ambulance was sent.
I’m not against the stroke ambulance – far from it – but there are still a lot questions to be answered. Putting a CT scanner and TPA in an ambulance is, comparatively, the easiest part.
Dr. Block has a solo neurology practice in Scottsdale, Ariz.
The TPA ambulance, armed with its own CT scanner, has arrived in the United States after several successful years in Germany.
Now what?
Like all new advances, it’s a difficult balance between costs and benefits. The money, in the end, is what it really comes down to. Will the cost of a CT ambulance, the equipment needed to send images to a radiologist, the extra training for EMTs, the price of stocking TPA on board, and maybe even having a neurologist on the ride (or telemedicine for one to see the patient) be offset by money saved on rehabilitation costs, better recoveries, fewer complications, even returning a patient to work?
I have no idea. I’m not sure anyone else does, either.
Certainly, I support the idea of improved stroke care. Although far from ideal, TPA is the only thing we have right now, and the sooner it’s given, the better. Most neurologists will agree. But who’s going to pay for this?
The insurance companies, obviously. But money is finite. What if we upgrade all these ambulances, only to find that there’s no significant cost savings on rehab and recovery when TPA is used in the field? Then the money comes out of doctors’ and nurses’ salaries, higher premiums for everyone, and a cutback in treatment for some other disorder. I’m pretty sure it won’t be taken out of an insurance executive’s year-end bonus.
And just try explaining that to the family of a stroke victim.
It’s not practical to put a CT scanner in every ambulance, so where do we put those so equipped? Again, there’s no easy answer. In areas with large retirement communities? Seems like a safe bet, but young people have strokes, too. Only in cities? More people live in cities, but those in rural areas may be too far from a hospital to receive TPA early. Shouldn’t they have one, too?
Who’s going to make the decision to send the TPA ambulance vs. the regular ambulance? That’s another tough question. The layman who calls in usually isn’t sure what’s going on, only that an ambulance is needed. The dispatcher often can’t tell over the phone if the patient has had a stroke, seizure, or psychogenic event. Should a neurologist or emergency medicine physician make the decision? Maybe, but how much extra time will it take to get one on the line? And, even then, they’ll be making a critical decision with sparse, secondhand information. What if the special ambulance is mistakenly sent to deal with a conversion disorder, only to have a legitimate stroke occur elsewhere when it’s no longer immediately available? That, inevitably, will lead to a lawsuit because the wrong ambulance was sent.
I’m not against the stroke ambulance – far from it – but there are still a lot of questions to be answered. Putting a CT scanner and TPA in an ambulance is, comparatively, the easiest part.
Dr. Block has a solo neurology practice in Scottsdale, Ariz.
Plasma product can be stored longer, FDA says
Photo by Cristina Granados
The US Food and Drug Administration (FDA) has approved a revised label for the pooled plasma product Octaplas, increasing the product’s shelf life.
The new label says Octaplas can now be stored frozen, at or below -18°C (-0.4°F), for 3 years from the date of manufacture.
And thawed Octaplas should be used within 24 hours if refrigerated (between 1°C and 6°C/33.8°F to 42.8°F) or within 8 hours if stored at room temperature (between 20°C and 25°C/68°F to 77°F).
The previous product label said frozen Octaplas could be stored for 2 years, and thawed Octaplas should be used within 12 hours if stored between 2°C and 4°C (35.6°F to 39.2°F) or within 3 hours if stored between 20°C and 25°C (68°F to 77°F).
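The storage windows above amount to a simple set of time limits. As a minimal sketch, the check for a thawed unit could look like the following; the function and dictionary names here are hypothetical, for illustration only, and not from any real inventory system.

```python
from datetime import datetime, timedelta

# New-label windows for thawed Octaplas: 24 h refrigerated (1-6 °C),
# 8 h at room temperature (20-25 °C). Names below are hypothetical.
THAWED_LIMITS_HOURS = {
    "refrigerated": 24,       # 1 °C to 6 °C
    "room_temperature": 8,    # 20 °C to 25 °C
}

def thawed_unit_usable(thawed_at: datetime, now: datetime, storage: str) -> bool:
    """Return True if a thawed unit is still within its labeled window."""
    limit = THAWED_LIMITS_HOURS[storage]
    return now - thawed_at <= timedelta(hours=limit)
```

For example, a unit thawed 12 hours ago would still be usable if refrigerated, but not if it had been kept at room temperature.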
About Octaplas
Octaplas is a sterile, frozen solution of human plasma from several donors that has been treated with a solvent detergent process to minimize the risk of serious virus transmission. The plasma is collected from US donors who have been screened and tested for diseases transmitted by blood.
Octaplas gained FDA approval in January 2013. The product is indicated for the replacement of multiple coagulation factors in patients with acquired deficiencies due to liver disease or undergoing cardiac surgery or liver transplant. Octaplas can also be used for plasma exchange in patients with thrombotic thrombocytopenic purpura.
Octaplas is contraindicated in patients with immunoglobulin A deficiency, severe deficiency of protein S, history of hypersensitivity to fresh-frozen plasma or to plasma-derived products including any plasma protein, or a history of hypersensitivity reaction to Octaplas.
Serious adverse events observed in clinical trials of Octaplas were anaphylactic shock, citrate toxicity, and severe hypotension. The most common adverse events observed in 1% of patients or more included pruritus, urticaria, nausea, headache, and paresthesia.
Transfusion reactions can occur with ABO blood group mismatches. High infusion rates can induce hypervolemia with consequent pulmonary edema or cardiac failure. Excessive bleeding due to hyperfibrinolysis can occur due to low levels of alpha2-antiplasmin.
Thrombosis can occur due to low levels of protein S. Citrate toxicity can occur with transfusion rates exceeding 1 mL/kg/min of Octaplas. As Octaplas is made from human plasma, it may carry a risk of transmitting infectious agents, such as viruses, the variant Creutzfeldt-Jakob disease agent, and, theoretically, the Creutzfeldt-Jakob disease agent.
For more details on Octaplas, see the complete prescribing information.
Polymer can stop lethal bleeding in vivo
A blood clot, with PolySTAT (blue) binding strands together
Image by William Walker, University of Washington
Preclinical research suggests an injectable polymer known as PolySTAT may one day be able to halt life-threatening bleeding in soldiers and trauma patients.
Once injected, this hemostatic polymer circulates in the blood, homes to sites of vascular injury, and promotes the formation of blood clots.
In experiments with rats, 100% of animals injected with PolySTAT survived a typically lethal injury to the femoral artery. In comparison, 0% to 40% of controls survived.
“Most of the patients who die from bleeding die quickly,” said Nathan White, MD, of the University of Washington in Seattle.
“[PolySTAT] is something you could potentially put in a syringe inside a backpack and give right away to reduce blood loss and keep people alive long enough to make it to medical care.”
Dr White and his colleagues described their work with PolySTAT in Science Translational Medicine. A related Focus article addressed the promises and challenges of advancing PolySTAT and other clotting approaches from proof-of-principle to clinical development.
PolySTAT induces hemostasis by cross-linking the fibrin matrix within blood clots, just as factor XIII does. But the researchers said PolySTAT offers greater protection against natural enzymes that dissolve blood clots.
That’s because PolySTAT binds to fibrin monomers and is uniformly integrated into fibrin fibers during polymerization. This produces a fortified, hybrid polymer network that can resist enzymatic degradation.
In vitro experiments showed that PolySTAT accelerated clotting kinetics, increased the strength of blood clots, and delayed clot breakdown.
The researchers also assessed how PolySTAT affected rats following a femoral artery injury, comparing results with PolySTAT to those with volume control (0.9% saline), a nonbinding scrambled control polymer (PolySCRAM), rat albumin, and human FXIIIa.
The team found that PolySTAT conferred superior survival by reducing blood loss and fluid resuscitation requirements.
All of the rats treated with PolySTAT (5/5) survived to the end of the experiment, compared to none of the rats that received albumin, 20% that received PolySCRAM or FXIIIa, and 40% that received volume control.
The researchers said PolySTAT’s initial safety profile looks promising, but they are still planning to test the polymer on larger animals and conduct additional screening to find out if PolySTAT binds to any other unintended substances.
The team also plans to investigate PolySTAT’s potential for treating hemophilia and for integration into bandages.
Rehospitalization after severe sepsis often avoidable
A new analysis indicates that patients hospitalized for severe sepsis are often readmitted within 90 days, and many of these readmissions may be preventable.
About 43% of the patients studied were readmitted to the hospital within 90 days of their sepsis hospitalization.
And 42% of these hospitalizations were due to conditions that could potentially be prevented or treated early to avoid hospitalization, according to researchers.
Hallie C. Prescott, MD, of the University of Michigan, Ann Arbor, and her colleagues reported these findings in JAMA.
The researchers analyzed participants in the nationally representative US Health and Retirement Study, a sample of households with adults 50 years of age or older that is linked to Medicare claims (1998-2010).
The team examined the most common readmission diagnoses among patients who were hospitalized for severe sepsis, the extent to which readmissions might have been preventable, and whether the pattern of readmission diagnoses differed compared with that of other acute medical conditions.
To gauge what proportion of rehospitalizations might have been preventable, the researchers looked at ambulatory-care-sensitive conditions (ACSCs) identified by the Agency for Healthcare Research and Quality. They also expanded the definition of ACSCs to include conditions that aren’t common among the general population but arise more often in sepsis survivors.
So their potentially preventable readmission diagnoses included pneumonia, hypertension, dehydration, asthma, urinary tract infection, chronic obstructive pulmonary disease exacerbation, perforated appendix, diabetes, angina, congestive heart failure, sepsis, acute renal failure, skin or soft tissue infection, and aspiration pneumonitis.
Dr Prescott and her colleagues identified 2617 hospitalizations for severe sepsis that could be matched to hospitalizations for other acute medical conditions. And they found that 1115 of the severe sepsis survivors (42.6%) were rehospitalized within 90 days.
The 10 most common readmission diagnoses following severe sepsis were sepsis (6.4%), congestive heart failure (5.5%), pneumonia (3.5%), acute renal failure (3.3%), rehabilitation (2.8%), respiratory failure (2.5%), complication related to a device, implant, or graft (2%), exacerbation of chronic obstructive pulmonary disorder (1.9%), aspiration pneumonitis (1.8%), and urinary tract infection (1.7%).
Readmissions for a primary diagnosis of infection (sepsis, pneumonia, urinary tract, and skin or soft tissue infection) occurred in 11.9% of severe sepsis survivors and 8.0% of patients with acute medical conditions (P<0.001).
Likewise, readmissions for ACSCs were more common after severe sepsis than for patients with acute conditions—21.6% and 19.1%, respectively (P=0.02)—and accounted for a greater proportion of all 90-day readmissions—41.6% and 37.1%, respectively (P=0.009).
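Comparisons like these (11.9% of 2617 sepsis survivors vs 8.0% of 2617 matched controls) are typically tested with a two-proportion z-test. The sketch below illustrates the arithmetic; the raw counts (311 and 209) are approximations reconstructed from the reported percentages, not figures taken from the paper.

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Approximate counts back-calculated from the reported percentages
z, p = two_proportion_z(311, 2617, 209, 2617)
# p comes out far below 0.001, consistent with the reported P<0.001
```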
“Many of these conditions can be managed if the patient can get in to see a doctor at the start of the illness, meaning that we potentially avoid hospitalization,” Dr Prescott said. “We need to assess their vulnerability and design a better landing pad for patients when they leave the hospital, and avoid the second hit that derails recovery.”
Researchers generate RBCs to treat SCD
Image by Ying Wang, Johns Hopkins Medicine
Researchers say they have devised a technique for generating normal, mature red blood cells (RBCs) from patients with sickle cell disease (SCD).
The team hopes that, ultimately, the RBCs could be transfused back into the patients from whom they are derived, eliminating the need for donor transfusion in SCD.
Linzhao Cheng, PhD, of the Johns Hopkins School of Medicine in Baltimore, Maryland, and his colleagues described the technique in Stem Cells.
Dr Cheng noted that SCD patients often require RBC transfusions, but over time, their bodies may begin to mount an immune response against the foreign blood.
“Their bodies quickly kill off the blood cells,” Dr Cheng said. “So they have to get transfusions more and more frequently.”
A solution, Dr Cheng and his colleagues thought, would be to generate RBCs for transfusion using a patient’s own cells.
To do this, the researchers first took hematopoietic cells from an SCD patient and generated induced pluripotent stem cells (iPSCs).
Then, the team used the gene-editing technique CRISPR/Cas9 to target the homozygous SCD mutation (nt. 69A>T) in the HBB gene and ensure the RBCs they generated would not be sickled.
Finally, they coaxed the iPSCs into mature RBCs that expressed the corrected HBB gene. The edited iPSCs generated RBCs just as efficiently as iPSCs that hadn’t been subjected to CRISPR/Cas9.
And the level of HBB protein expression in the RBCs derived from edited iPSCs was similar to that of RBCs generated from unedited iPSCs.
Dr Cheng noted that, to become medically useful, this method will have to be made more efficient and scaled up significantly. And the cells would need to be tested for safety.
“[Nevertheless,] this study shows it may be possible in the not-too-distant future to provide patients with sickle cell disease with an exciting new treatment option,” Dr Cheng said.
He and his colleagues believe this method of RBC generation may also be applicable for other blood disorders. And they think it might be possible to edit cells from healthy individuals so they can resist malaria and other infectious agents.
Another research group has reported the ability to correct the SCD mutation using zinc-finger nucleases.
Automated Sepsis Alert Systems
Sepsis is the most expensive condition treated in the hospital, resulting in an aggregate cost of $20.3 billion or 5.2% of total aggregate cost for all hospitalizations in the United States.[1] Rates of sepsis and sepsis‐related mortality are rising in the United States.[2, 3] Timely treatment of sepsis, including adequate fluid resuscitation and appropriate antibiotic administration, decreases morbidity, mortality, and costs.[4, 5, 6] Consequently, the Surviving Sepsis Campaign recommends timely care with the implementation of sepsis bundles and protocols.[4] Though effective, sepsis protocols require dedicated personnel with specialized training, who must be highly vigilant and constantly monitor a patient's condition for the course of an entire hospitalization.[7, 8] As such, delays in administering evidence‐based therapies are common.[8, 9]
Automated electronic sepsis alerts are being developed and implemented to facilitate the delivery of timely sepsis care. Electronic alert systems synthesize electronic health data routinely collected for clinical purposes in real time or near real time to automatically identify sepsis based on prespecified diagnostic criteria, and immediately alert providers that their patient may meet sepsis criteria via electronic notifications (eg, through electronic health record [EHR], e‐mail, or pager alerts).
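In practice, the identification step described above reduces to rule evaluation over routinely collected structured data. The sketch below illustrates the most common trigger used in the included studies (2 or more SIRS criteria); the function and field names are hypothetical, and real systems evaluate these rules against live EHR feeds rather than a dictionary:

```python
# Minimal sketch of a SIRS-based sepsis alert trigger: fire when >=2 of the 4
# standard SIRS criteria are met. Field names and functions are illustrative
# only, not taken from any system described in this review.

def sirs_criteria_met(vitals: dict) -> int:
    """Count how many of the 4 standard SIRS criteria are satisfied."""
    count = 0
    if vitals["temp_c"] > 38.0 or vitals["temp_c"] < 36.0:   # temperature
        count += 1
    if vitals["heart_rate"] > 90:                            # tachycardia
        count += 1
    if vitals["resp_rate"] > 20 or vitals.get("paco2_mmhg", 40) < 32:  # tachypnea
        count += 1
    wbc = vitals["wbc_k_per_ul"]                             # leukocytosis/leukopenia
    if wbc > 12 or wbc < 4:
        count += 1
    return count

def should_alert(vitals: dict, threshold: int = 2) -> bool:
    """Trigger the (passive) alert when >=threshold SIRS criteria are met."""
    return sirs_criteria_met(vitals) >= threshold

# Example: febrile, tachycardic patient with normal WBC meets 2 criteria.
patient = {"temp_c": 38.6, "heart_rate": 104, "resp_rate": 18, "wbc_k_per_ul": 9.0}
print(should_alert(patient))  # True
```

Systems combining SIRS criteria with hypotension or lactate thresholds, as in some of the studies reviewed below, simply conjoin additional conditions to this rule.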
However, few data exist to describe whether automated, electronic systems achieve their intended goal of earlier, more effective sepsis care. To examine this question, we performed a systematic review of automated electronic sepsis alerts to assess their suitability for clinical use. Our 2 objectives were: (1) to describe the diagnostic accuracy of alert systems in identifying sepsis using electronic data available in real time or near real time, and (2) to evaluate the effectiveness of sepsis alert systems on sepsis care process measures and clinical outcomes.
MATERIALS AND METHODS
Data Sources and Search Strategies
We searched PubMed MEDLINE, Embase, The Cochrane Library, and the Cumulative Index to Nursing and Allied Health Literature from database inception through June 27, 2014, for all studies that contained the following 3 concepts: sepsis, electronic systems, and alerts (or identification). All citations were imported into an electronic database (EndNote X5; Thomson‐Reuters Corp., New York, NY) (see Supporting Information, Appendix, in the online version of this article for our complete search strategy).
Study Selection
Two authors (A.N.M. and O.K.N.) reviewed the citation titles, abstracts, and full‐text articles of potentially relevant references identified from the literature search for eligibility. References of selected articles were hand searched to identify additional eligible studies. Inclusion criteria for eligible studies were: (1) adult patients (aged ≥18 years) receiving care either in the emergency department or hospital, (2) outcomes of interest including diagnostic accuracy in identification of sepsis, and/or effectiveness of sepsis alerts on process measures and clinical outcomes evaluated using empiric data, and (3) sepsis alert systems that used real‐time or near real‐time electronically available data to enable proactive, timely management. We excluded studies that: (1) tested the effect of other electronic interventions that were not sepsis alerts (ie, computerized order sets) for sepsis management; (2) focused solely on detecting and treating central line‐associated bloodstream infections, shock (not otherwise specified), bacteremia, or other device‐related infections; and (3) evaluated the effectiveness of sepsis alerts without a control group.
Data Extraction and Quality Assessment
Two reviewers (A.N.M. and O.K.N.) extracted data on the clinical setting, study design, dates of enrollment, definition of sepsis, details of the identification and alert systems, diagnostic accuracy of the alert system, and the incidence of process measures and clinical outcomes using a standardized form. Discrepancies between reviewers were resolved by discussion and consensus. Data discrepancies identified in 1 study were resolved by contacting the corresponding author.[10]
For studies assessing the diagnostic accuracy of sepsis identification, study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies revised tool.[11] For studies evaluating the effectiveness of sepsis alert systems, studies were considered high quality if a contemporaneous control group was present to account for temporal trends (eg, randomized controlled trial or observational analysis with a concurrent control). Fair‐quality studies were before‐and‐after studies that adjusted for potential confounders between time periods. Low‐quality studies included those that did not account for temporal trends, such as before‐and‐after studies using only historical controls without adjustment. Studies that did not use an intention‐to‐treat analysis were also considered low quality. The strength of the overall body of evidence, including risk of bias, was guided by the Grading of Recommendations Assessment, Development, and Evaluation Working Group Criteria adapted by the Agency of Healthcare Research and Quality.[12]
Data Synthesis
To analyze the diagnostic accuracy of automated sepsis alert systems to identify sepsis and to evaluate the effect on outcomes, we performed a qualitative assessment of all studies. We were unable to perform a meta‐analysis due to significant heterogeneity in study quality, clinical setting, and definition of the sepsis alert. Diagnostic accuracy of sepsis identification was measured by sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and likelihood ratio (LR). Effectiveness was assessed by changes in sepsis care process measures (ie, time to antibiotics) and outcomes (length of stay, mortality).
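For reference, the diagnostic accuracy measures listed above all derive from the standard 2×2 table comparing alert activation against the gold standard diagnosis. A minimal sketch with illustrative counts (not taken from any included study):

```python
# Standard diagnostic accuracy measures computed from a 2x2 table of
# alert result vs gold standard sepsis diagnosis. The tp/fp/fn/tn counts
# in the example are illustrative only.

def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)        # sensitivity: P(alert+ | sepsis present)
    spec = tn / (tn + fp)        # specificity: P(alert- | sepsis absent)
    ppv = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)         # negative predictive value
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "lr+": lr_pos, "lr-": lr_neg}

# Hypothetical cohort: 100 septic and 900 nonseptic patients.
m = diagnostic_accuracy(tp=80, fp=120, fn=20, tn=780)
print({k: round(v, 2) for k, v in m.items()})
```

Note that sensitivity, specificity, and the likelihood ratios are properties of the alert rule itself, whereas PPV and NPV also depend on the prevalence of sepsis in the population studied, a point that matters when comparing the ED, ward, and ICU studies below.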
RESULTS
Description of Studies
Of 1293 titles, 183 qualified for abstract review, 84 for full‐text review, and 8 articles met our inclusion criteria (see Supporting Figure in the online version of this article). Five articles evaluated the diagnostic accuracy of sepsis identification,[10, 13, 14, 15, 16] and 5 articles[10, 14, 17, 18, 19] evaluated the effectiveness of automated electronic sepsis alerts on sepsis process measures and patient outcomes. All articles were published between 2009 and 2014 and were single‐site studies conducted at academic medical centers (Tables 1 and 2). The clinical settings in the included studies varied and included the emergency department (ED), hospital wards, and the intensive care unit (ICU).
Source | Site No./Type | Setting | Alert Threshold | Gold Standard Definition | Gold Standard Measurement | No. | Study Quality
---|---|---|---|---|---|---|---
Hooper et al., 2012[10] | 1/academic | MICU | ≥2 SIRS criteria | Reviewer judgment, not otherwise specified | Chart review | 560 | High
Meurer et al., 2009[13] | 1/academic | ED | ≥2 SIRS criteria | Reviewer judgment whether diagnosis of infection present in ED plus SIRS criteria | Chart review | 248 | Low
Nelson et al., 2011[14] | 1/academic | ED | ≥2 SIRS criteria and ≥2 SBP measurements <90 mm Hg | Reviewer judgment whether infection present, requiring hospitalization with at least 1 organ system involved | Chart review | 1,386 | High
Nguyen et al., 2014[15] | 1/academic | ED | ≥2 SIRS criteria and ≥1 sign of shock (SBP ≤90 mm Hg or lactic acid ≥2.0 mmol/L) | Reviewer judgment to confirm SIRS, shock, and presence of a serious infection | Chart review | 1,095 | Low
Thiel et al., 2010[16] | 1/academic | Wards | Recursive partitioning tree analysis including vitals and laboratory results | Admitted to the hospital wards and subsequently transferred to the ICU for septic shock and treated with vasopressor therapy | ICD‐9 discharge codes for acute infection, acute organ dysfunction, and need for vasopressors within 24 hours of ICU transfer | 27,674 | Low
Source | Design | Site No./Type | Setting | No. | Alert System Type | Alert Threshold | Alert Notification | Treatment Recommendation | Study Quality
---|---|---|---|---|---|---|---|---|---
Berger et al., 2010[17] | Before‐after (6 months pre and 6 months post) | 1/academic | ED | 5,796 | CPOE system | ≥2 SIRS criteria | CPOE passive alert | Yes: lactate collection | Low
Hooper et al., 2012[10] | RCT | 1/academic | MICU | 443 | EHR | ≥2 SIRS criteria | Text page and EHR passive alert | No | High
McRee et al., 2014[18] | Before‐after (6 months pre and 6 months post) | 1/academic | Wards | 171 | EHR | ≥2 SIRS criteria | Notified nurse, specifics unclear | No, but the nurse completed a sepsis risk evaluation flow sheet | Low
Nelson et al., 2011[14] | Before‐after (3 months pre and 3 months post) | 1/academic | ED | 184 | EHR | ≥2 SIRS criteria and ≥2 SBP readings <90 mm Hg | Text page and EHR passive alert | Yes: fluid resuscitation, blood culture collection, antibiotic administration, among others | Low
Sawyer et al., 2011[19] | Prospective, nonrandomized (2 intervention and 4 control wards) | 1/academic | Wards | 300 | EHR | Recursive partitioning regression tree algorithm including vitals and lab values | Text page to charge nurse who then assessed patient and informed treating physician | No | High
Among the 8 included studies, there was significant heterogeneity in threshold criteria for sepsis identification and subsequent alert activation. The most commonly defined threshold was the presence of 2 or more systemic inflammatory response syndrome (SIRS) criteria.[10, 13, 17, 18]
Diagnostic Accuracy of Automated Electronic Sepsis Alert Systems
The prevalence of sepsis varied substantially between the studies depending on the gold standard definition of sepsis used and the clinical setting (ED, wards, or ICU) of the study (Table 3). The 2 studies[14, 16] that defined sepsis as requiring evidence of shock had a substantially lower prevalence (0.8%–4.7%) compared to the 2 studies[10, 13] that defined sepsis as having only 2 or more SIRS criteria with a presumed diagnosis of an infection (27.8%–32.5%).
Source | Setting | Alert Threshold | Prevalence, % | Sensitivity, % (95% CI) | Specificity, % (95% CI) | PPV, % (95% CI) | NPV, % (95% CI) | LR+ (95% CI) | LR− (95% CI)
---|---|---|---|---|---|---|---|---|---
Hooper et al., 2012[10] | MICU | ≥2 SIRS criteria | 36.3 | 98.9 (95.7–99.8) | 18.1 (14.2–22.9) | 40.7 (36.1–45.5) | 96.7 (87.5–99.4) | 1.21 (1.14–1.27) | 0.06 (0.01–0.25)
Meurer et al., 2009[13] | ED | ≥2 SIRS criteria | 27.8 | 36.2 (25.3–48.8) | 79.9 (73.1–85.3) | 41.0 (28.8–54.3) | 76.5 (69.6–82.2) | 1.80 (1.17–2.76) | 0.80 (0.67–0.96)
Nelson et al., 2011[14] | ED | ≥2 SIRS criteria and ≥2 SBP measurements <90 mm Hg | 0.8 | 63.6 (31.6–87.8) | 99.6 (99.0–99.8) | 53.8 (26.1–79.6) | 99.7 (99.2–99.9) | 145.8 (58.4–364.1) | 0.37 (0.17–0.80)
Nguyen et al., 2014[15] | ED | ≥2 SIRS criteria and ≥1 sign of shock (SBP ≤90 mm Hg or lactic acid ≥2.0 mmol/L) | Unable to estimate | Unable to estimate | Unable to estimate | 44.7 (41.2–48.2) | 100.0 (98.8–100.0) | Unable to estimate | Unable to estimate
Thiel et al., 2010[16] | Wards | Recursive partitioning tree analysis including vitals and laboratory results | 4.7 | 17.1 (15.1–19.3) | 96.7 (96.5–96.9) | 20.5 (18.2–23.0) | 95.9 (95.7–96.2) | 5.22 (4.56–5.98) | 0.86 (0.84–0.88)
All alert systems had suboptimal PPV (20.5%‐53.8%). The 2 studies that designed the sepsis alert to activate by SIRS criteria alone[10, 13] had a positive predictive value of 41% and a positive LR of 1.21 to 1.80. The ability to exclude the presence of sepsis varied considerably depending on the clinical setting. The study by Hooper et al.[10] that examined the alert among patients in the medical ICU appeared more effective at ruling out sepsis (NPV=96.7%; negative LR=0.06) compared to a similar alert system used by Meurer et al.[13] that studied patients in the ED (NPV=76.5%, negative LR=0.80).
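The dependence of predictive values on the clinical setting follows directly from disease prevalence: PPV and NPV can be recovered from a study's sensitivity, specificity, and prevalence via Bayes' rule. As a sketch, plugging in the values reported for Hooper et al. reproduces the PPV and NPV shown in Table 3:

```python
# Predictive values as a function of prevalence (Bayes' rule). Inputs below are
# the sensitivity (98.9%), specificity (18.1%), and prevalence (36.3%) reported
# for Hooper et al. in Table 3; the function name is illustrative.

def predictive_values(sens: float, spec: float, prev: float) -> tuple:
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

ppv, npv = predictive_values(sens=0.989, spec=0.181, prev=0.363)
print(round(ppv * 100, 1), round(npv * 100, 1))  # ~40.8 and ~96.7, matching Table 3
```

The same arithmetic explains why the identical SIRS-based rule yields a strong NPV in the high-prevalence ICU but a weak one in the lower-prevalence ED.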
There were also differences in the diagnostic accuracy of the sepsis alert systems depending on how the threshold for activating the sepsis alert was defined and applied in the study. Two studies evaluated a sepsis alert system among patients presenting to the ED at the same academic medical center.[13, 14] The alert system (Nelson et al.) that was triggered by a combination of SIRS criteria and hypotension (PPV=53.8%, LR+=145.8; NPV=99.7%, LR−=0.37) outperformed the alert system (Meurer et al.) that was triggered by SIRS criteria alone (PPV=41.0%, LR+=1.80; NPV=76.5%, LR−=0.80). Furthermore, the study by Meurer and colleagues evaluated the accuracy of the alert system only among patients who were hospitalized after presenting to the ED, rather than all consecutive patients presenting to the ED. This selection bias likely falsely inflated the diagnostic accuracy of the alert system used by Meurer et al., suggesting the alert system that was triggered by a combination of SIRS criteria and hypotension was comparatively even more accurate.
Two studies evaluating the diagnostic accuracy of the alert system were deemed to be high quality (Table 4). Three studies were considered low quality: 1 study did not include all patients in their assessment of diagnostic accuracy[13]; 1 study consecutively selected alert cases but randomly selected nonalert cases, greatly limiting the assessment of diagnostic accuracy[15]; and the other study applied a gold standard that was unlikely to correctly classify sepsis (septic shock requiring ICU transfer with vasopressor support in the first 24 hours was defined by discharge International Classification of Diseases, Ninth Revision diagnoses without chart review), with a considerable delay from the alert system trigger (alert identification was compared to the discharge diagnosis rather than physician review of real‐time data).[16]
Study | Patient Selection | Index Test | Reference Standard | Flow and Timing
---|---|---|---|---
Hooper et al., 2012[10] | +++ | +++ | ++ | +++
Meurer et al., 2009[13] | +++ | +++ | ++ | +
Nelson et al., 2011[14] | +++ | +++ | ++ | +++
Nguyen et al., 2014[15] | + | +++ | + | +++
Thiel et al., 2010[16] | +++ | +++ | + | +
Effectiveness of Automated Electronic Sepsis Alert Systems
Characteristics of the studies evaluating the effectiveness of automated electronic sepsis alert systems are summarized in Table 2. Regarding activation of the sepsis alert, 2 studies notified the provider directly by an automated text page and a passive EHR alert (not requiring the provider to acknowledge the alert or take action),[10, 14] 1 study notified the provider by a passive electronic alert alone,[17] and 1 study only employed an automated text page.[19] Furthermore, if the sepsis alert was activated, 2 studies suggested specific clinical management decisions,[14, 17] 2 studies left clinical management decisions solely to the discretion of the treating provider,[10, 19] and 1 study assisted the diagnosis of sepsis by prompting nurses to complete a second manual sepsis risk evaluation.[18]
Table 5 summarizes the effectiveness of automated electronic sepsis alert systems. Two studies evaluating the effectiveness of the sepsis alert system were considered to be high‐quality studies based on the use of a contemporaneous control group to account for temporal trends and an intention‐to‐treat analysis.[10, 19] The 2 studies evaluating the effectiveness of a sepsis alert system in the ED were considered low quality due to before‐and‐after designs without an intention‐to‐treat analysis.[14, 17]
Source | Outcomes Evaluated | Key Findings | Quality
---|---|---|---
Hooper et al., 2012[10] | Primary: time to receipt of antibiotic (new or changed) | No difference (6.1 hours for control vs 6.0 hours for intervention, P=0.95) | High
 | Secondary: sepsis‐related process measures and outcomes | No difference in the amount of IV fluid administered within 6 hours (964 mL vs 1,019 mL, P=0.6), collection of blood cultures (adjusted HR 1.01; 95% CI, 0.76 to 1.35), collection of lactate (adjusted HR 0.84; 95% CI, 0.54 to 1.30), ICU length of stay (3.0 vs 3.0 days, P=0.2), hospital length of stay (4.7 vs 5.7 days, P=0.08), and hospital mortality (10% for control vs 14% for intervention, P=0.3) |
Sawyer et al., 2011[19] | Primary: sepsis‐related process measures (antibiotic escalation, IV fluids, oxygen therapy, vasopressor initiation, diagnostic testing [blood culture, CXR]) within 12 hours of alert | Increases in receiving ≥1 measure (56% for control vs 71% for intervention, P=0.02), antibiotic escalation (24% vs 36%, P=0.04), IV fluid administration (24% vs 38%, P=0.01), and oxygen therapy (8% vs 20%, P=0.005). There were nonsignificant increases in obtaining diagnostic tests (40% vs 52%, P=0.06) and vasopressor initiation (3% vs 6%, P=0.4) | High
 | Secondary: ICU transfer, hospital length of stay, hospital length of stay after alert, in‐hospital mortality | Similar rate of ICU transfer (23% for control vs 26% for intervention, P=0.6), hospital length of stay (7 vs 9 days, median, P=0.8), hospital length of stay after alert (5 vs 6 days, median, P=0.7), and in‐hospital mortality (12% vs 10%, P=0.7) |
Berger et al., 2010[17] | Primary: lactate collection in ED | Increase in lactate collection in the ED (5.2% before vs 12.7% after alert implemented, absolute increase of 7.5%; 95% CI, 6.0% to 9.0%) | Low
 | Secondary: lactate collection among hospitalized patients, proportion of patients with abnormal lactate (≥4 mmol/L), and in‐hospital mortality among hospitalized patients | Increase in lactate collection among hospitalized patients (15.3% vs 34.2%, absolute increase of 18.9%; 95% CI, 15.0% to 22.8%); decrease in the proportion of abnormal lactate values (21.9% vs 14.8%, absolute decrease of 7.6%; 95% CI, −15.8% to −0.6%); and no significant difference in mortality (5.7% vs 5.2%, absolute decrease of 0.5%; 95% CI, −1.6% to 2.6%; P=0.6) |
McRee et al., 2014[18] | Stage of sepsis, length of stay, mortality, discharge location | Nonsignificant decrease in stage of sepsis (34.7% with septic shock before vs 21.9% after, P>0.05); no difference in length of stay (8.5 days before vs 8.7 days after, P>0.05). Decrease in mortality (9.3% before vs 1.0% after, P<0.05) and increase in the proportion of patients discharged home (25.3% before vs 49.0% after, P<0.05) | Low
Nelson et al., 2011[14] | Frequency and time to completion of process measures: lactate, blood culture, CXR, and antibiotic initiation | Increases in blood culture collection (OR 2.9; 95% CI, 1.1 to 7.7) and CXR (OR 3.2; 95% CI, 1.1 to 9.5); nonsignificant increases in lactate collection (OR 1.7; 95% CI, 0.9 to 3.2) and antibiotic administration (OR 2.8; 95% CI, 0.9 to 8.3). Only blood cultures were collected in a more timely manner (median of 86 minutes before vs 81 minutes after alert implementation, P=0.03) | Low
Neither of the 2 high‐quality studies that included a contemporaneous control found evidence for improving inpatient mortality or hospital and ICU length of stay.[10, 19] The impact of sepsis alert systems on improving process measures for sepsis management depended on the clinical setting. In a randomized controlled trial of patients admitted to a medical ICU, Hooper et al. did not find any benefit of implementing a sepsis alert system on improving intermediate outcome measures such as antibiotic escalation, fluid resuscitation, and collection of blood cultures and lactate.[10] However, in a well‐designed observational study, Sawyer et al. found significant increases in antibiotic escalation, fluid resuscitation, and diagnostic testing in patients admitted to the medical wards.[19] Both studies that evaluated the effectiveness of sepsis alert systems in the ED showed improvements in various process measures,[14, 17] but without improvement in mortality.[17] The single study that showed improvement in clinical outcomes (in‐hospital mortality and disposition location) was of low quality due to its before‐and‐after design without adjustment for potential confounders and the lack of an intention‐to‐treat analysis (only individuals with a discharge diagnosis of sepsis were included, rather than all individuals who triggered the alert).[18] Additionally, the preintervention group had a higher proportion of individuals with septic shock compared to the postintervention group, raising the possibility that the observed improvement was due to a difference in severity of illness between the 2 groups rather than to the intervention.
None of the studies included in this review explicitly reported on the potential harms (eg, excess antimicrobial use or alert fatigue) after implementation of sepsis alerts, but Hooper et al. found a nonsignificant increase in mortality, and Sawyer et al. showed a nonsignificant increase in the length of stay in the intervention group compared to the control group.[10, 19] Berger et al. showed an overall increase in the number of lactate tests performed, but with a decrease in the proportion of abnormal lactate values (21.9% vs 14.8%, absolute decrease of 7.6%; 95% confidence interval, −15.8% to −0.6%), suggesting potential overtesting in patients at low risk for septic shock. In the study by Hooper et al., 88% (442/502) of the patients in the medical intensive care unit triggered an alert, raising the concern for alert fatigue.[10] Furthermore, 3 studies did not perform intention‐to‐treat analyses; rather, they included only patients who triggered the alert and also had provider‐suspected or confirmed sepsis,[14, 17] or had a discharge diagnosis for sepsis.[18]
DISCUSSION
The use of sepsis alert systems derived from electronic health data and targeting hospitalized patients improves a subset of sepsis process‐of‐care measures, but at the cost of poor positive predictive value and with no clear improvement in mortality or length of stay. There is insufficient evidence for the effectiveness of automated electronic sepsis alert systems in the emergency department.
We found considerable variability in the diagnostic accuracy of automated electronic sepsis alert systems. There was moderate evidence that alert systems designed to identify severe sepsis (eg, SIRS criteria plus measures of shock) had greater diagnostic accuracy than alert systems that detected sepsis based on SIRS criteria alone. Given that SIRS criteria are highly prevalent among hospitalized patients with noninfectious diseases,[20] sepsis alert systems triggered by standard SIRS criteria may have poorer predictive value, with an increased risk of alert fatigue (excessive electronic warnings that lead physicians to disregard clinically useful alerts).[21] The potential for alert fatigue is even greater in critical care settings. A retrospective analysis of physiological alarms in the ICU estimated an average of 6 alarms per hour, with only 15% of alarms considered clinically relevant.[22]
The fact that sepsis alert systems improve intermediate process measures among ward and ED patients but not ICU patients likely reflects differences in both the patients and the clinical settings.[23] First, patients in the ICU may already be prescribed broad spectrum antibiotics, aggressively fluid resuscitated, and have other diagnostic testing performed before the activation of a sepsis alert, so it would be less likely to see an improvement in the rates of process measures assessing initiation or escalation of therapy compared to patients treated on the wards or in the ED. The apparent lack of benefit of these systems in the ICU may merely represent a ceiling effect. Second, nurses and physicians are already vigilantly monitoring patients in the ICU for signs of clinical deterioration, so additional alert systems may be redundant. Third, patients in the ICU are connected to standard bedside monitors that continuously monitor for the presence of abnormal vital signs. An additional sepsis alert system triggered by SIRS criteria alone may be superfluous to the existing infrastructure. Fourth, the majority of patients in the ICU will trigger the sepsis alert system,[10] so there likely is a high noise‐to‐signal ratio with resultant alert fatigue.[21]
In addition to a greater emphasis on alert systems of greater diagnostic accuracy and effectiveness, our review notes several important gaps that limit the evidence supporting the usefulness of automated sepsis alert systems. First, there are few data describing the optimal design of sepsis alerts[24, 25] or the frequency with which they are appropriately acted upon or dismissed. In addition, we found little data on whether the effectiveness of alert systems differed based on whether clinical decision support was included with the alert itself (eg, direct prompting with specific clinical management recommendations) or on the configuration of the alert (eg, interruptive or informational).[24, 25] Most of the studies we reviewed employed alerts primarily targeting physicians; we found little evidence for systems that also alerted other providers (eg, nurses or rapid response teams). Few studies provided data on the harms of these systems (eg, excess antimicrobial use, fluid overload due to aggressive fluid resuscitation) or on how often these treatments were administered to patients who did not ultimately have sepsis. Few studies employed designs that limited bias (eg, randomized or quasiexperimental designs) or used an intention‐to‐treat approach. Studies that exclude false‐positive alerts from analyses could bias estimates toward making sepsis alert systems appear more effective than they actually are. Finally, although deploying automated sepsis alerts in the ED would presumably facilitate more timely recognition and treatment, more rigorously conducted studies are needed to determine whether these alerts are of greater value in the ED than on the wards or in the ICU. Given the limited number of studies included in this review, we were unable to draw strong conclusions regarding the clinical benefits and cost‐effectiveness of implementing automated sepsis alerts.
Our review has certain limitations. First, despite our extensive literature search strategy, we may have missed studies published in the grey literature or in non‐English languages. Second, there is potential publication bias given the number of abstracts that we identified addressing 1 of our prespecified research questions compared to the number of peer‐reviewed publications identified by our search strategy.
CONCLUSION
Automated electronic sepsis alert systems have promise in delivering early goal‐directed therapies to patients. However, at present, automated sepsis alerts derived from electronic health data may improve care processes but tend to have poor PPV and have not been shown to improve mortality or length of stay. Future efforts should develop and study methods for sepsis alert systems that avoid the potential for alert fatigue while improving outcomes.
Acknowledgements
The authors thank Gloria Won, MLIS, for her assistance with developing and performing the literature search strategy and wish her a long and joyous retirement.
Disclosures: Part of Dr. Makam's work on this project was completed while he was a primary care research fellow at the University of California, San Francisco, funded by a National Research Service Award (training grant T32HP19025‐07‐00). Dr. Makam is currently supported by the National Center for Advancing Translational Sciences of the National Institutes of Health (KL2TR001103). Dr. Nguyen was supported by the Agency for Healthcare Research and Quality (R24HS022428‐01). Dr. Auerbach was supported by an NHLBI K24 grant (K24HL098372). Dr. Makam had full access to the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: all authors. Acquisition of data: Makam and Nguyen. Analysis and interpretation of data: all authors. Drafting of the manuscript: Makam. Critical revision of the manuscript: all authors. Statistical analysis: Makam and Nguyen. The authors have no conflicts of interest to disclose.
1. National inpatient hospital costs: the most expensive conditions by payer, 2011: statistical brief #160. Healthcare Cost and Utilization Project (HCUP) Statistical Briefs. Rockville, MD: Agency for Healthcare Research and Quality; 2013.
2. Inpatient care for septicemia or sepsis: a challenge for patients and hospitals. NCHS Data Brief. 2011;(62):1–8.
3. The epidemiology of sepsis in the United States from 1979 through 2000. N Engl J Med. 2003;348(16):1546–1554.
4. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med. 2013;41(2):580–637.
5. Early goal‐directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368–1377.
6. A randomized trial of protocol‐based care for early septic shock. N Engl J Med. 2014;370(18):1683–1693.
7. Implementation of early goal‐directed therapy for septic patients in the emergency department: a review of the literature. J Emerg Nurs. 2013;39(1):13–19.
8. Factors influencing variability in compliance rates and clinical outcomes among three different severe sepsis bundles. Ann Pharmacother. 2007;41(6):929–936.
9. Improvement in process of care and outcome after a multicenter severe sepsis educational program in Spain. JAMA. 2008;299(19):2294–2303.
10. Randomized trial of automated, electronic monitoring to facilitate early detection of sepsis in the intensive care unit. Crit Care Med. 2012;40(7):2096–2101.
11. QUADAS‐2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529–536.
12. AHRQ series paper 5: grading the strength of a body of evidence when comparing medical interventions: Agency for Healthcare Research and Quality and the Effective Health‐Care Program. J Clin Epidemiol. 2010;63(5):513–523.
13. Real‐time identification of serious infection in geriatric patients using clinical information system surveillance. J Am Geriatr Soc. 2009;57(1):40–45.
14. Prospective trial of real‐time electronic surveillance to expedite early care of severe sepsis. Ann Emerg Med. 2011;57(5):500–504.
15. Automated electronic medical record sepsis detection in the emergency department. PeerJ. 2014;2:e343.
16. Early prediction of septic shock in hospitalized patients. J Hosp Med. 2010;5(1):19–25.
17. A computerized alert screening for severe sepsis in emergency department patients increases lactate testing but does not improve inpatient mortality. Appl Clin Inform. 2010;1(4):394–407.
18. The impact of an electronic medical record surveillance program on outcomes for patients with sepsis. Heart Lung. 2014;43(6):546–549.
19. Implementation of a real‐time computerized sepsis alert in nonintensive care unit patients. Crit Care Med. 2011;39(3):469–473.
20. The epidemiology of the systemic inflammatory response. Intensive Care Med. 2000;26(suppl 1):S64–S74.
21. Overrides of medication‐related clinical decision support alerts in outpatients. J Am Med Inform Assoc. 2014;21(3):487–491.
22. Intensive care unit alarms: how many do we need? Crit Care Med. 2010;38(2):451–456.
23. How can we best use electronic data to find and treat the critically ill? Crit Care Med. 2012;40(7):2242–2243.
24. Identifying best practices for clinical decision support and knowledge management in the field. Stud Health Technol Inform. 2010;160(pt 2):806–810.
25. Best practices in clinical decision support: the case of preventive care reminders. Appl Clin Inform. 2010;1(3):331–345.
Sepsis is the most expensive condition treated in the hospital, resulting in an aggregate cost of $20.3 billion or 5.2% of total aggregate cost for all hospitalizations in the United States.[1] Rates of sepsis and sepsis‐related mortality are rising in the United States.[2, 3] Timely treatment of sepsis, including adequate fluid resuscitation and appropriate antibiotic administration, decreases morbidity, mortality, and costs.[4, 5, 6] Consequently, the Surviving Sepsis Campaign recommends timely care with the implementation of sepsis bundles and protocols.[4] Though effective, sepsis protocols require dedicated personnel with specialized training, who must be highly vigilant and constantly monitor a patient's condition for the course of an entire hospitalization.[7, 8] As such, delays in administering evidence‐based therapies are common.[8, 9]
Automated electronic sepsis alerts are being developed and implemented to facilitate the delivery of timely sepsis care. Electronic alert systems synthesize electronic health data routinely collected for clinical purposes in real time or near real time to automatically identify sepsis based on prespecified diagnostic criteria, and immediately alert providers that their patient may meet sepsis criteria via electronic notifications (eg, through electronic health record [EHR], e‐mail, or pager alerts).
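The triggering logic behind such systems is typically a simple rule evaluated over the latest charted values. The following is a purely illustrative sketch, not any specific vendor's or study's implementation: the field names are hypothetical, and the PaCO2 and immature‐band components of SIRS are omitted for brevity.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Latest charted values for one patient (hypothetical schema)."""
    temp_c: float      # temperature, degrees Celsius
    heart_rate: int    # beats per minute
    resp_rate: int     # breaths per minute
    wbc: float         # white blood cell count, x10^9/L

def sirs_count(o: Observation) -> int:
    """Count how many of the common SIRS criteria are met."""
    return sum([
        o.temp_c > 38.0 or o.temp_c < 36.0,   # fever or hypothermia
        o.heart_rate > 90,                    # tachycardia
        o.resp_rate > 20,                     # tachypnea
        o.wbc > 12.0 or o.wbc < 4.0,          # leukocytosis or leukopenia
    ])

def should_alert(o: Observation, threshold: int = 2) -> bool:
    """Fire when at least `threshold` SIRS criteria are met (2 or more
    was the most common rule among the reviewed studies)."""
    return sirs_count(o) >= threshold

# A notification layer would then send the text page or set the passive
# EHR flag; that plumbing is site-specific and omitted here.
```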
However, few data describe whether automated, electronic systems achieve their intended goal of earlier, more effective sepsis care. To examine this question, we performed a systematic review of automated electronic sepsis alerts to assess their suitability for clinical use. Our 2 objectives were: (1) to describe the diagnostic accuracy of alert systems in identifying sepsis using electronic data available in real time or near real time, and (2) to evaluate the effectiveness of sepsis alert systems on sepsis care process measures and clinical outcomes.
MATERIALS AND METHODS
Data Sources and Search Strategies
We searched PubMed MEDLINE, Embase, The Cochrane Library, and the Cumulative Index to Nursing and Allied Health Literature from database inception through June 27, 2014, for all studies that contained the following 3 concepts: sepsis, electronic systems, and alerts (or identification). All citations were imported into an electronic database (EndNote X5; Thomson‐Reuters Corp., New York, NY) (see Supporting Information, Appendix, in the online version of this article for our complete search strategy).
Study Selection
Two authors (A.N.M. and O.K.N.) reviewed the citation titles, abstracts, and full‐text articles of potentially relevant references identified from the literature search for eligibility. References of selected articles were hand searched to identify additional eligible studies. Inclusion criteria were: (1) adult patients (aged ≥18 years) receiving care either in the emergency department or hospital; (2) outcomes of interest including diagnostic accuracy in identification of sepsis, and/or effectiveness of sepsis alerts on process measures and clinical outcomes evaluated using empiric data; and (3) sepsis alert systems that used real‐time or near real‐time electronically available data to enable proactive, timely management. We excluded studies that: (1) tested the effect of electronic interventions that were not sepsis alerts (eg, computerized order sets) for sepsis management; (2) focused solely on detecting and treating central line‐associated bloodstream infections, shock (not otherwise specified), bacteremia, or other device‐related infections; or (3) evaluated the effectiveness of sepsis alerts without a control group.
Data Extraction and Quality Assessment
Two reviewers (A.N.M. and O.K.N.) extracted data on the clinical setting, study design, dates of enrollment, definition of sepsis, details of the identification and alert systems, diagnostic accuracy of the alert system, and the incidence of process measures and clinical outcomes using a standardized form. Discrepancies between reviewers were resolved by discussion and consensus. Data discrepancies identified in 1 study were resolved by contacting the corresponding author.[10]
For studies assessing the diagnostic accuracy of sepsis identification, study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies revised tool.[11] For studies evaluating the effectiveness of sepsis alert systems, studies were considered high quality if a contemporaneous control group was present to account for temporal trends (eg, randomized controlled trial or observational analysis with a concurrent control). Fair‐quality studies were before‐and‐after studies that adjusted for potential confounders between time periods. Low‐quality studies included those that did not account for temporal trends, such as before‐and‐after studies using only historical controls without adjustment. Studies that did not use an intention‐to‐treat analysis were also considered low quality. The strength of the overall body of evidence, including risk of bias, was guided by the Grading of Recommendations Assessment, Development, and Evaluation Working Group criteria adapted by the Agency for Healthcare Research and Quality.[12]
Data Synthesis
To analyze the diagnostic accuracy of automated sepsis alert systems to identify sepsis and to evaluate the effect on outcomes, we performed a qualitative assessment of all studies. We were unable to perform a meta‐analysis due to significant heterogeneity in study quality, clinical setting, and definition of the sepsis alert. Diagnostic accuracy of sepsis identification was measured by sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and likelihood ratio (LR). Effectiveness was assessed by changes in sepsis care process measures (eg, time to antibiotics) and outcomes (eg, length of stay, mortality).
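These accuracy measures all derive from the 2×2 table of alert status against gold‐standard sepsis. A minimal sketch (the counts in the example are hypothetical, chosen only to illustrate the arithmetic):

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Diagnostic accuracy measures from 2x2 counts: alert fired
    (test positive) versus gold-standard sepsis (condition present)."""
    sens = tp / (tp + fn)            # sensitivity: alerts among true sepsis
    spec = tn / (tn + fp)            # specificity: silence among non-sepsis
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),       # positive predictive value
        "npv": tn / (tn + fn),       # negative predictive value
        "lr_pos": sens / (1 - spec), # positive likelihood ratio
        "lr_neg": (1 - sens) / spec, # negative likelihood ratio
    }

# Hypothetical example: 80 true alerts, 120 false alerts,
# 20 missed cases, 780 correct non-alerts.
m = diagnostic_accuracy(tp=80, fp=120, fn=20, tn=780)
print(round(m["sensitivity"], 2), round(m["ppv"], 2))  # 0.8 0.4
```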
RESULTS
Description of Studies
Of 1293 titles, 183 qualified for abstract review, 84 for full‐text review, and 8 articles met our inclusion criteria (see Supporting Figure in the online version of this article). Five articles evaluated the diagnostic accuracy of sepsis identification,[10, 13, 14, 15, 16] and 5 articles[10, 14, 17, 18, 19] evaluated the effectiveness of automated electronic sepsis alerts on sepsis process measures and patient outcomes. All articles were published between 2009 and 2014 and were single‐site studies conducted at academic medical centers (Tables 1 and 2). The clinical settings in the included studies varied and included the emergency department (ED), hospital wards, and the intensive care unit (ICU).
Source | Site No./Type | Setting | Alert Threshold | Gold Standard Definition | Gold Standard Measurement | No. | Study Qualitya
---|---|---|---|---|---|---|---|
Hooper et al., 2012[10] | 1/academic | MICU | ≥2 SIRS criteriab | Reviewer judgment, not otherwise specified | Chart review | 560 | High
Meurer et al., 2009[13] | 1/academic | ED | ≥2 SIRS criteria | Reviewer judgment whether diagnosis of infection present in ED plus SIRS criteria | Chart review | 248 | Low
Nelson et al., 2011[14] | 1/academic | ED | ≥2 SIRS criteria and ≥2 SBP measurements <90 mm Hg | Reviewer judgment whether infection present, requiring hospitalization with at least 1 organ system involved | Chart review | 1,386 | High
Nguyen et al., 2014[15] | 1/academic | ED | ≥2 SIRS criteria and ≥1 sign of shock (SBP <90 mm Hg or lactic acid ≥2.0 mmol/L) | Reviewer judgment to confirm SIRS, shock, and presence of a serious infection | Chart review | 1,095 | Low
Thiel et al., 2010[16] | 1/academic | Wards | Recursive partitioning tree analysis including vitals and laboratory resultsc | Admitted to the hospital wards and subsequently transferred to the ICU for septic shock and treated with vasopressor therapy | ICD‐9 discharge codes for acute infection, acute organ dysfunction, and need for vasopressors within 24 hours of ICU transfer | 27,674 | Low
Source | Design | Site No./Type | Setting | No. | Alert System Type | Alert Threshold | Alert Notificationa | Treatment Recommendation | Study Qualityb
---|---|---|---|---|---|---|---|---|---|
Berger et al., 2010[17] | Before‐after (6 months pre and 6 months post) | 1/academic | ED | 5796c | CPOE system | ≥2 SIRS criteria | CPOE passive alert | Yes: lactate collection | Low
Hooper et al., 2012[10] | RCT | 1/academic | MICU | 443 | EHR | ≥2 SIRS criteriad | Text page and EHR passive alert | No | High
McRee et al., 2014[18] | Before‐after (6 months pre and 6 months post) | 1/academic | Wards | 171e | EHR | ≥2 SIRS criteria | Notified nurse; specifics unclear | No, but the nurse completed a sepsis risk evaluation flow sheet | Low
Nelson et al., 2011[14] | Before‐after (3 months pre and 3 months post) | 1/academic | ED | 184f | EHR | ≥2 SIRS criteria and ≥2 SBP readings <90 mm Hg | Text page and EHR passive alert | Yes: fluid resuscitation, blood culture collection, antibiotic administration, among others | Low
Sawyer et al., 2011[19] | Prospective, nonrandomized (2 intervention and 4 control wards) | 1/academic | Wards | 300 | EHR | Recursive partitioning regression tree algorithm including vitals and lab valuesg | Text page to charge nurse, who then assessed the patient and informed the treating physicianh | No | High
Among the 8 included studies, there was significant heterogeneity in threshold criteria for sepsis identification and subsequent alert activation. The most commonly defined threshold was the presence of 2 or more systemic inflammatory response syndrome (SIRS) criteria.[10, 13, 17, 18]
Diagnostic Accuracy of Automated Electronic Sepsis Alert Systems
The prevalence of sepsis varied substantially between the studies depending on the gold standard definition of sepsis used and the clinical setting (ED, wards, or ICU) of the study (Table 3). The 2 studies[14, 16] that defined sepsis as requiring evidence of shock had a substantially lower prevalence (0.8%–4.7%) compared to the 2 studies[10, 13] that defined sepsis as having only 2 or more SIRS criteria with a presumed diagnosis of an infection (27.8%–32.5%).
Source | Setting | Alert Threshold | Prevalence, % | Sensitivity, % (95% CI) | Specificity, % (95% CI) | PPV, % (95% CI) | NPV, % (95% CI) | LR+ (95% CI) | LR− (95% CI)
---|---|---|---|---|---|---|---|---|---|
Hooper et al., 2012[10] | MICU | ≥2 SIRS criteriaa | 36.3 | 98.9 (95.7–99.8) | 18.1 (14.2–22.9) | 40.7 (36.1–45.5) | 96.7 (87.5–99.4) | 1.21 (1.14–1.27) | 0.06 (0.01–0.25)
Meurer et al., 2009[13] | ED | ≥2 SIRS criteria | 27.8 | 36.2 (25.3–48.8) | 79.9 (73.1–85.3) | 41.0 (28.8–54.3) | 76.5 (69.6–82.2) | 1.80 (1.17–2.76) | 0.80 (0.67–0.96)
Nelson et al., 2011[14] | ED | ≥2 SIRS criteria and ≥2 SBP measurements <90 mm Hg | 0.8 | 63.6 (31.6–87.8) | 99.6 (99.0–99.8) | 53.8 (26.1–79.6) | 99.7 (99.2–99.9) | 145.8 (58.4–364.1) | 0.37 (0.17–0.80)
Nguyen et al., 2014[15] | ED | ≥2 SIRS criteria and ≥1 sign of shock (SBP <90 mm Hg or lactic acid ≥2.0 mmol/L) | Unable to estimateb | Unable to estimateb | Unable to estimateb | 44.7 (41.2–48.2) | 100.0c (98.8–100.0) | Unable to estimateb | Unable to estimateb
Thiel et al., 2010[16] | Wards | Recursive partitioning tree analysis including vitals and laboratory resultsd | 4.7 | 17.1 (15.1–19.3) | 96.7 (96.5–96.9) | 20.5 (18.2–23.0) | 95.9 (95.7–96.2) | 5.22 (4.56–5.98) | 0.86 (0.84–0.88)
All alert systems had suboptimal PPV (20.5%‐53.8%). The 2 studies that designed the sepsis alert to activate by SIRS criteria alone[10, 13] had a positive predictive value of 41% and a positive LR of 1.21 to 1.80. The ability to exclude the presence of sepsis varied considerably depending on the clinical setting. The study by Hooper et al.[10] that examined the alert among patients in the medical ICU appeared more effective at ruling out sepsis (NPV=96.7%; negative LR=0.06) compared to a similar alert system used by Meurer et al.[13] that studied patients in the ED (NPV=76.5%, negative LR=0.80).
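The uniformly poor PPVs partly reflect disease prevalence, not just the alerts' intrinsic accuracy: by Bayes' rule, the same sensitivity and specificity yield very different PPVs at ward‐level versus shock‐level prevalence. A small illustration with made‐up test characteristics:

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value from sensitivity, specificity,
    and disease prevalence (Bayes' rule)."""
    true_pos = sens * prev                 # P(alert and sepsis)
    false_pos = (1 - spec) * (1 - prev)    # P(alert and no sepsis)
    return true_pos / (true_pos + false_pos)

# Identical (hypothetical) test characteristics, two prevalence settings:
print(round(ppv(0.90, 0.90, 0.30), 2))  # ~30% prevalence -> PPV 0.79
print(round(ppv(0.90, 0.90, 0.01), 2))  # ~1% prevalence  -> PPV 0.08
```

This is why alerts restricted to severe sepsis or shock, where prevalence among flagged patients is far lower, face a steeper predictive‐value penalty unless specificity is very high.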
There were also differences in the diagnostic accuracy of the sepsis alert systems depending on how the threshold for activating the sepsis alert was defined and applied in the study. Two studies evaluated a sepsis alert system among patients presenting to the ED at the same academic medical center.[13, 14] The alert system (Nelson et al.) that was triggered by a combination of SIRS criteria and hypotension (PPV=53.8%, LR+=145.8; NPV=99.7%, LR−=0.37) outperformed the alert system (Meurer et al.) that was triggered by SIRS criteria alone (PPV=41.0%, LR+=1.80; NPV=76.5%, LR−=0.80). Furthermore, the study by Meurer and colleagues evaluated the accuracy of the alert system only among patients who were hospitalized after presenting to the ED, rather than among all consecutive patients presenting to the ED. This selection bias likely falsely inflated the diagnostic accuracy of the alert system used by Meurer et al., suggesting the alert system triggered by a combination of SIRS criteria and hypotension was comparatively even more accurate.
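The likelihood ratios in Table 3 follow directly from the reported sensitivity and specificity (LR+ = sensitivity/(1 − specificity); LR− = (1 − sensitivity)/specificity), which is a useful sanity check when comparing systems; small discrepancies against the published values reflect rounding of the inputs.

```python
def lr_pos(sens: float, spec: float) -> float:
    """Positive likelihood ratio."""
    return sens / (1 - spec)

def lr_neg(sens: float, spec: float) -> float:
    """Negative likelihood ratio."""
    return (1 - sens) / spec

# Meurer et al. (ED, SIRS criteria alone): sens 36.2%, spec 79.9%
print(round(lr_pos(0.362, 0.799), 2), round(lr_neg(0.362, 0.799), 2))  # 1.8 0.8
# Hooper et al. (MICU): sens 98.9%, spec 18.1%
print(round(lr_pos(0.989, 0.181), 2), round(lr_neg(0.989, 0.181), 2))  # 1.21 0.06
```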
Two studies evaluating the diagnostic accuracy of the alert system were deemed to be high quality (Table 4). Three studies were considered low quality: 1 study did not include all patients in their assessment of diagnostic accuracy[13]; 1 study consecutively selected alert cases but randomly selected nonalert cases, greatly limiting the assessment of diagnostic accuracy[15]; and the other study applied a gold standard that was unlikely to correctly classify sepsis (septic shock requiring ICU transfer with vasopressor support in the first 24 hours was defined by discharge International Classification of Diseases, Ninth Revision diagnoses without chart review), with a considerable delay from the alert system trigger (alert identification was compared to the discharge diagnosis rather than physician review of real‐time data).[16]
Study | Patient Selection | Index Test | Reference Standard | Flow and Timing
---|---|---|---|---|
Hooper et al., 2012[10] | +++ | +++ | ++b | +++
Meurer et al., 2009[13] | +++ | +++ | ++b | +c
Nelson et al., 2011[14] | +++ | +++ | ++b | +++
Nguyen et al., 2014[15] | +d | +++ | +e | +++
Thiel et al., 2010[16] | +++ | +++ | +f | +g
Effectiveness of Automated Electronic Sepsis Alert Systems
Characteristics of the studies evaluating the effectiveness of automated electronic sepsis alert systems are summarized in Table 2. Regarding activation of the sepsis alert, 2 studies notified the provider directly by an automated text page and a passive EHR alert (not requiring the provider to acknowledge the alert or take action),[10, 14] 1 study notified the provider by a passive electronic alert alone,[17] and 1 study only employed an automated text page.[19] Furthermore, if the sepsis alert was activated, 2 studies suggested specific clinical management decisions,[14, 17] 2 studies left clinical management decisions solely to the discretion of the treating provider,[10, 19] and 1 study assisted the diagnosis of sepsis by prompting nurses to complete a second manual sepsis risk evaluation.[18]
Table 5 summarizes the effectiveness of automated electronic sepsis alert systems. Two studies evaluating the effectiveness of the sepsis alert system were considered to be high‐quality studies based on the use of a contemporaneous control group to account for temporal trends and an intention‐to‐treat analysis.[10, 19] The 2 studies evaluating the effectiveness of a sepsis alert system in the ED were considered low quality due to before‐and‐after designs without an intention‐to‐treat analysis.[14, 17]
Source | Outcomes Evaluated | Key Findings | Quality
---|---|---|---|
Hooper et al., 2012[10] | Primary: time to receipt of antibiotic (new or changed) | No difference (6.1 hours for control vs 6.0 hours for intervention, P=0.95) | High
 | Secondary: sepsis‐related process measures and outcomes | No difference in amount of 6‐hour IV fluid administration (964 mL vs 1,019 mL, P=0.6), collection of blood cultures (adjusted HR 1.01; 95% CI, 0.76 to 1.35), collection of lactate (adjusted HR 0.84; 95% CI, 0.54 to 1.30), ICU length of stay (3.0 vs 3.0 days, P=0.2), hospital length of stay (4.7 vs 5.7 days, P=0.08), and hospital mortality (10% for control vs 14% for intervention, P=0.3) | 
Sawyer et al., 2011[19] | Primary: sepsis‐related process measures (antibiotic escalation, IV fluids, oxygen therapy, vasopressor initiation, diagnostic testing [blood culture, CXR]) within 12 hours of alert | Increases in receiving ≥1 measure (56% for control vs 71% for intervention, P=0.02), antibiotic escalation (24% vs 36%, P=0.04), IV fluid administration (24% vs 38%, P=0.01), and oxygen therapy (8% vs 20%, P=0.005). There were nonsignificant increases in obtaining diagnostic tests (40% vs 52%, P=0.06) and vasopressor initiation (3% vs 6%, P=0.4) | High
 | Secondary: ICU transfer, hospital length of stay, hospital length of stay after alert, in‐hospital mortality | Similar rate of ICU transfer (23% for control vs 26% for intervention, P=0.6), hospital length of stay (7 vs 9 days, median, P=0.8), hospital length of stay after alert (5 vs 6 days, median, P=0.7), and in‐hospital mortality (12% vs 10%, P=0.7) | 
Berger et al., 2010[17] | Primary: lactate collection in ED | Increase in lactate collection in the ED (5.2% before vs 12.7% after alert implemented; absolute increase of 7.5%; 95% CI, 6.0% to 9.0%) | Low
 | Secondary: lactate collection among hospitalized patients, proportion of patients with abnormal lactate (≥4 mmol/L), and in‐hospital mortality among hospitalized patients | Increase in lactate collection among hospitalized patients (15.3% vs 34.2%; absolute increase of 18.9%; 95% CI, 15.0% to 22.8%); decrease in the proportion of abnormal lactate values (21.9% vs 14.8%; absolute decrease of 7.6%; 95% CI, −15.8% to −0.6%); and no significant difference in mortality (5.7% vs 5.2%; absolute decrease of 0.5%; 95% CI, −1.6% to 2.6%; P=0.6) | 
McRee et al., 2014[18] | Stage of sepsis, length of stay, mortality, discharge location | Nonsignificant decrease in stage of sepsis (34.7% with septic shock before vs 21.9% after, P>0.05); no difference in length of stay (8.5 days before vs 8.7 days after, P>0.05). Decrease in mortality (9.3% before vs 1.0% after, P<0.05) and increase in the proportion of patients discharged home (25.3% before vs 49.0% after, P<0.05) | Low
Nelson et al., 2011[14] | Frequency and time to completion of process measures: lactate, blood culture, CXR, and antibiotic initiation | Increases in blood culture collection (OR 2.9; 95% CI, 1.1 to 7.7) and CXR (OR 3.2; 95% CI, 1.1 to 9.5); nonsignificant increases in lactate collection (OR 1.7; 95% CI, 0.9 to 3.2) and antibiotic administration (OR 2.8; 95% CI, 0.9 to 8.3). Only blood cultures were collected in a more timely manner (median of 86 minutes before vs 81 minutes after alert implementation, P=0.03) | Low
Neither of the 2 high‐quality studies that included a contemporaneous control found evidence for improving inpatient mortality or hospital and ICU length of stay.[10, 19] The impact of sepsis alert systems on improving process measures for sepsis management depended on the clinical setting. In a randomized controlled trial of patients admitted to a medical ICU, Hooper et al. did not find any benefit of implementing a sepsis alert system on improving intermediate outcome measures such as antibiotic escalation, fluid resuscitation, and collection of blood cultures and lactate.[10] However, in a well‐designed observational study, Sawyer et al. found significant increases in antibiotic escalation, fluid resuscitation, and diagnostic testing in patients admitted to the medical wards.[19] Both studies that evaluated the effectiveness of sepsis alert systems in the ED showed improvements in various process measures,[14, 17] but without improvement in mortality.[17] The single study that showed improvement in clinical outcomes (in‐hospital mortality and disposition location) was of low quality due to its before‐and‐after design without adjustment for potential confounders and its lack of an intention‐to‐treat analysis (only individuals with a discharge diagnosis of sepsis were included, rather than all individuals who triggered the alert).[18] Additionally, the preintervention group had a higher proportion of individuals with septic shock compared to the postintervention group, raising the possibility that the observed improvement was due to differences in severity of illness between the 2 groups rather than to the intervention.
None of the studies included in this review explicitly reported on the potential harms (eg, excess antimicrobial use or alert fatigue) after implementation of sepsis alerts, but Hooper et al. found a nonsignificant increase in mortality, and Sawyer et al. showed a nonsignificant increase in the length of stay in the intervention group compared to the control group.[10, 19] Berger et al. showed an overall increase in the number of lactate tests performed, but with a decrease in the proportion of abnormal lactate values (21.9% vs 14.8%; absolute decrease of 7.6%; 95% confidence interval, −15.8% to −0.6%), suggesting potential overtesting in patients at low risk for septic shock. In the study by Hooper et al., 88% (442/502) of the patients in the medical intensive care unit triggered an alert, raising the concern for alert fatigue.[10] Furthermore, 3 studies did not perform intention‐to‐treat analyses; rather, they included only patients who triggered the alert and also had provider‐suspected or confirmed sepsis,[14, 17] or had a discharge diagnosis of sepsis.[18]
DISCUSSION
The use of sepsis alert systems derived from electronic health data and targeting hospitalized patients improves a subset of sepsis process‐of‐care measures, but at the cost of poor positive predictive value and with no clear improvement in mortality or length of stay. There is insufficient evidence for the effectiveness of automated electronic sepsis alert systems in the emergency department.
We found considerable variability in the diagnostic accuracy of automated electronic sepsis alert systems. There was moderate evidence that alert systems designed to identify severe sepsis (eg, SIRS criteria plus measures of shock) had greater diagnostic accuracy than alert systems that detected sepsis based on SIRS criteria alone. Given that SIRS criteria are highly prevalent among hospitalized patients with noninfectious diseases,[20] sepsis alert systems triggered by standard SIRS criteria may have poorer predictive value with an increased risk of alert fatigue: excessive electronic warnings that lead physicians to disregard clinically useful alerts.[21] The potential for alert fatigue is even greater in critical care settings. A retrospective analysis of physiological alarms in the ICU estimated an average of 6 alarms per hour, with only 15% of alarms considered clinically relevant.[22]
The fact that sepsis alert systems improve intermediate process measures among ward and ED patients but not ICU patients likely reflects differences in both the patients and the clinical settings.[23] First, patients in the ICU may already be prescribed broad spectrum antibiotics, aggressively fluid resuscitated, and have other diagnostic testing performed before the activation of a sepsis alert, so it would be less likely to see an improvement in the rates of process measures assessing initiation or escalation of therapy compared to patients treated on the wards or in the ED. The apparent lack of benefit of these systems in the ICU may merely represent a ceiling effect. Second, nurses and physicians are already vigilantly monitoring patients in the ICU for signs of clinical deterioration, so additional alert systems may be redundant. Third, patients in the ICU are connected to standard bedside monitors that continuously monitor for the presence of abnormal vital signs. An additional sepsis alert system triggered by SIRS criteria alone may be superfluous to the existing infrastructure. Fourth, the majority of patients in the ICU will trigger the sepsis alert system,[10] so there likely is a high noise‐to‐signal ratio with resultant alert fatigue.[21]
In addition to greater emphasis on alert systems of greater diagnostic accuracy and effectiveness, our review notes several important gaps that limit evidence supporting the usefulness of automated sepsis alert systems. First, there are few data describing the optimal design of sepsis alerts[24, 25] or the frequency with which they are appropriately acted upon or dismissed. In addition, we found little data to support whether the effectiveness of alert systems differed based on whether clinical decision support was included with the alert itself (eg, direct prompting with specific clinical management recommendations) or on the configuration of the alert (eg, interruptive or informational).[24, 25] Most of the studies we reviewed employed alerts primarily targeting physicians; we found little evidence for systems that also alerted other providers (eg, nurses or rapid response teams). Few studies provided data on harms of these systems (eg, excess antimicrobial use, fluid overload due to aggressive fluid resuscitation) or on how often these treatments were administered to patients who did not eventually have sepsis. Few studies employed study designs that limited biases (eg, randomized or quasiexperimental designs) or used an intention‐to‐treat approach. Studies that exclude false‐positive alerts from analyses could bias estimates toward making sepsis alert systems appear more effective than they actually are. Finally, although deploying automated sepsis alerts in the ED would presumably facilitate more timely recognition and treatment, more rigorously conducted studies are needed to identify whether using these alerts in the ED is of greater value compared to the wards and ICU. Given the limited number of studies included in this review, we were unable to make strong conclusions regarding the clinical benefits and cost‐effectiveness of implementing automated sepsis alerts.
Our review has certain limitations. First, despite our extensive literature search strategy, we may have missed studies published in the grey literature or in non‐English languages. Second, there is potential publication bias given the number of abstracts that we identified addressing 1 of our prespecified research questions compared to the number of peer‐reviewed publications identified by our search strategy.
CONCLUSION
Automated electronic sepsis alert systems have promise in delivering early goal‐directed therapies to patients. However, at present, automated sepsis alerts derived from electronic health data may improve care processes but tend to have poor PPV and have not been shown to improve mortality or length of stay. Future efforts should develop and study methods for sepsis alert systems that avoid the potential for alert fatigue while improving outcomes.
Acknowledgements
The authors thank Gloria Won, MLIS, for her assistance with developing and performing the literature search strategy and wish her a long and joyous retirement.
Disclosures: Part of Dr. Makam's work on this project was completed while he was a primary care research fellow at the University of California, San Francisco, funded by a National Research Service Award (training grant T32HP19025‐07‐00). Dr. Makam is currently supported by the National Center for Advancing Translational Sciences of the National Institutes of Health (KL2TR001103). Dr. Nguyen was supported by the Agency for Healthcare Research and Quality (R24HS022428‐01). Dr. Auerbach was supported by an NHLBI K24 grant (K24HL098372). Dr. Makam had full access to the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: all authors. Acquisition of data: Makam and Nguyen. Analysis and interpretation of data: all authors. Drafting of the manuscript: Makam. Critical revision of the manuscript: all authors. Statistical analysis: Makam and Nguyen. The authors have no conflicts of interest to disclose.
Sepsis is the most expensive condition treated in the hospital, resulting in an aggregate cost of $20.3 billion or 5.2% of total aggregate cost for all hospitalizations in the United States.[1] Rates of sepsis and sepsis‐related mortality are rising in the United States.[2, 3] Timely treatment of sepsis, including adequate fluid resuscitation and appropriate antibiotic administration, decreases morbidity, mortality, and costs.[4, 5, 6] Consequently, the Surviving Sepsis Campaign recommends timely care with the implementation of sepsis bundles and protocols.[4] Though effective, sepsis protocols require dedicated personnel with specialized training, who must be highly vigilant and constantly monitor a patient's condition for the course of an entire hospitalization.[7, 8] As such, delays in administering evidence‐based therapies are common.[8, 9]
Automated electronic sepsis alerts are being developed and implemented to facilitate the delivery of timely sepsis care. Electronic alert systems synthesize electronic health data routinely collected for clinical purposes in real time or near real time to automatically identify sepsis based on prespecified diagnostic criteria, and immediately alert providers that their patient may meet sepsis criteria via electronic notifications (eg, through electronic health record [EHR], e‐mail, or pager alerts).
However, little data exist to describe whether automated, electronic systems achieve their intended goal of earlier, more effective sepsis care. To examine this question, we performed a systematic review on automated electronic sepsis alerts to assess their suitability for clinical use. Our 2 objectives were: (1) to describe the diagnostic accuracy of alert systems in identifying sepsis using electronic data available in real‐time or near real‐time, and (2) to evaluate the effectiveness of sepsis alert systems on sepsis care process measures and clinical outcomes.
MATERIALS AND METHODS
Data Sources and Search Strategies
We searched PubMed MEDLINE, Embase, The Cochrane Library, and the Cumulative Index to Nursing and Allied Health Literature from database inception through June 27, 2014, for all studies that contained the following 3 concepts: sepsis, electronic systems, and alerts (or identification). All citations were imported into an electronic database (EndNote X5; Thomson‐Reuters Corp., New York, NY) (see Supporting Information, Appendix, in the online version of this article for our complete search strategy).
Study Selection
Two authors (A.N.M. and O.K.N.) reviewed the citation titles, abstracts, and full‐text articles of potentially relevant references identified from the literature search for eligibility. References of selected articles were hand searched to identify additional eligible studies. Inclusion criteria for eligible studies were: (1) adult patients (aged ≥18 years) receiving care either in the emergency department or hospital, (2) outcomes of interest including diagnostic accuracy in identification of sepsis, and/or effectiveness of sepsis alerts on process measures and clinical outcomes evaluated using empiric data, and (3) sepsis alert systems used real-time or near-real-time electronically available data to enable proactive, timely management. We excluded studies that: (1) tested the effect of other electronic interventions that were not sepsis alerts (ie, computerized order sets) for sepsis management; (2) focused solely on detecting and treating central line‐associated bloodstream infections, shock (not otherwise specified), bacteremia, or other device‐related infections; and (3) evaluated the effectiveness of sepsis alerts without a control group.
Data Extraction and Quality Assessment
Two reviewers (A.N.M. and O.K.N.) extracted data on the clinical setting, study design, dates of enrollment, definition of sepsis, details of the identification and alert systems, diagnostic accuracy of the alert system, and the incidence of process measures and clinical outcomes using a standardized form. Discrepancies between reviewers were resolved by discussion and consensus. Data discrepancies identified in 1 study were resolved by contacting the corresponding author.[10]
For studies assessing the diagnostic accuracy of sepsis identification, study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies revised tool.[11] For studies evaluating the effectiveness of sepsis alert systems, studies were considered high quality if a contemporaneous control group was present to account for temporal trends (eg, randomized controlled trial or observational analysis with a concurrent control). Fair‐quality studies were before‐and‐after studies that adjusted for potential confounders between time periods. Low‐quality studies included those that did not account for temporal trends, such as before‐and‐after studies using only historical controls without adjustment. Studies that did not use an intention‐to‐treat analysis were also considered low quality. The strength of the overall body of evidence, including risk of bias, was guided by the Grading of Recommendations Assessment, Development, and Evaluation Working Group Criteria adapted by the Agency of Healthcare Research and Quality.[12]
Data Synthesis
To analyze the diagnostic accuracy of automated sepsis alert systems to identify sepsis and to evaluate the effect on outcomes, we performed a qualitative assessment of all studies. We were unable to perform a meta‐analysis due to significant heterogeneity in study quality, clinical setting, and definition of the sepsis alert. Diagnostic accuracy of sepsis identification was measured by sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and likelihood ratio (LR). Effectiveness was assessed by changes in sepsis care process measures (ie, time to antibiotics) and outcomes (length of stay, mortality).
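For reference, all of these measures follow directly from a standard 2×2 contingency table comparing alert status against the gold-standard diagnosis. A minimal sketch in Python (the function name and layout are illustrative, not drawn from any reviewed study):

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic-accuracy measures from a 2x2 table.

    tp/fp: alert fired, sepsis present/absent
    fn/tn: no alert, sepsis present/absent
    """
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
        "lr_pos": sensitivity / (1 - specificity),  # LR+
        "lr_neg": (1 - sensitivity) / specificity,  # LR-
    }
```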
RESULTS
Description of Studies
Of 1293 titles, 183 qualified for abstract review, 84 for full‐text review, and 8 articles met our inclusion criteria (see Supporting Figure in the online version of this article). Five articles evaluated the diagnostic accuracy of sepsis identification,[10, 13, 14, 15, 16] and 5 articles[10, 14, 17, 18, 19] evaluated the effectiveness of automated electronic sepsis alerts on sepsis process measures and patient outcomes. All articles were published between 2009 and 2014 and were single‐site studies conducted at academic medical centers (Tables 1 and 2). The clinical settings in the included studies varied and included the emergency department (ED), hospital wards, and the intensive care unit (ICU).
Source | Site No./Type | Setting | Alert Threshold | Gold Standard Definition | Gold Standard Measurement | No. | Study Quality^a
---|---|---|---|---|---|---|---
Hooper et al., 2012[10] | 1/academic | MICU | ≥2 SIRS criteria^b | Reviewer judgment, not otherwise specified | Chart review | 560 | High
Meurer et al., 2009[13] | 1/academic | ED | ≥2 SIRS criteria | Reviewer judgment whether diagnosis of infection present in ED plus SIRS criteria | Chart review | 248 | Low
Nelson et al., 2011[14] | 1/academic | ED | ≥2 SIRS criteria and ≥2 SBP measurements <90 mm Hg | Reviewer judgment whether infection present, requiring hospitalization with at least 1 organ system involved | Chart review | 1,386 | High
Nguyen et al., 2014[15] | 1/academic | ED | ≥2 SIRS criteria and ≥1 sign of shock (SBP ≤90 mm Hg or lactic acid ≥2.0 mmol/L) | Reviewer judgment to confirm SIRS, shock, and presence of a serious infection | Chart review | 1,095 | Low
Thiel et al., 2010[16] | 1/academic | Wards | Recursive partitioning tree analysis including vitals and laboratory results^c | Admitted to the hospital wards and subsequently transferred to the ICU for septic shock and treated with vasopressor therapy | ICD‐9 discharge codes for acute infection, acute organ dysfunction, and need for vasopressors within 24 hours of ICU transfer | 27,674 | Low
Source | Design | Site No./Type | Setting | No. | Alert System Type | Alert Threshold | Alert Notification^a | Treatment Recommendation | Study Quality^b
---|---|---|---|---|---|---|---|---|---
Berger et al., 2010[17] | Before‐after (6 months pre and 6 months post) | 1/academic | ED | 5,796^c | CPOE system | ≥2 SIRS criteria | CPOE passive alert | Yes: lactate collection | Low
Hooper et al., 2012[10] | RCT | 1/academic | MICU | 443 | EHR | ≥2 SIRS criteria^d | Text page and EHR passive alert | No | High
McRee et al., 2014[18] | Before‐after (6 months pre and 6 months post) | 1/academic | Wards | 171^e | EHR | ≥2 SIRS criteria | Notified nurse, specifics unclear | No, but the nurse completed a sepsis risk evaluation flow sheet | Low
Nelson et al., 2011[14] | Before‐after (3 months pre and 3 months post) | 1/academic | ED | 184^f | EHR | ≥2 SIRS criteria and ≥2 SBP readings <90 mm Hg | Text page and EHR passive alert | Yes: fluid resuscitation, blood culture collection, antibiotic administration, among others | Low
Sawyer et al., 2011[19] | Prospective, nonrandomized (2 intervention and 4 control wards) | 1/academic | Wards | 300 | EHR | Recursive partitioning regression tree algorithm including vitals and lab values^g | Text page to charge nurse, who then assessed the patient and informed the treating physician^h | No | High
Among the 8 included studies, there was significant heterogeneity in threshold criteria for sepsis identification and subsequent alert activation. The most commonly defined threshold was the presence of 2 or more systemic inflammatory response syndrome (SIRS) criteria.[10, 13, 17, 18]
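For illustration, the most common trigger (the presence of ≥2 SIRS criteria) can be sketched as a simple rule over vital signs and white blood cell count. This is a hypothetical simplification using the standard adult SIRS cutoffs, not the exact logic of any reviewed system (deployed systems may also check bands, PaCO2, and the timing of measurements):

```python
def sirs_count(temp_c: float, hr: float, rr: float, wbc_k: float) -> int:
    """Count SIRS criteria met, using the standard adult cutoffs."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,  # temperature, degrees C
        hr > 90,                         # heart rate, beats/min
        rr > 20,                         # respiratory rate, breaths/min
        wbc_k > 12.0 or wbc_k < 4.0,     # WBC, x10^3 cells/uL
    ]
    return sum(criteria)


def sepsis_alert(temp_c: float, hr: float, rr: float, wbc_k: float) -> bool:
    """Fire the alert when >=2 SIRS criteria are present."""
    return sirs_count(temp_c, hr, rr, wbc_k) >= 2
```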
Diagnostic Accuracy of Automated Electronic Sepsis Alert Systems
The prevalence of sepsis varied substantially between the studies depending on the gold standard definition of sepsis used and the clinical setting (ED, wards, or ICU) of the study (Table 3). The 2 studies[14, 16] that defined sepsis as requiring evidence of shock had a substantially lower prevalence (0.8%–4.7%) compared to the 2 studies[10, 13] that defined sepsis as having only 2 or more SIRS criteria with a presumed diagnosis of an infection (27.8%–32.5%).
Source | Setting | Alert Threshold | Prevalence, % | Sensitivity, % (95% CI) | Specificity, % (95% CI) | PPV, % (95% CI) | NPV, % (95% CI) | LR+ (95% CI) | LR− (95% CI)
---|---|---|---|---|---|---|---|---|---
Hooper et al., 2012[10] | MICU | ≥2 SIRS criteria^a | 36.3 | 98.9 (95.7–99.8) | 18.1 (14.2–22.9) | 40.7 (36.1–45.5) | 96.7 (87.5–99.4) | 1.21 (1.14–1.27) | 0.06 (0.01–0.25)
Meurer et al., 2009[13] | ED | ≥2 SIRS criteria | 27.8 | 36.2 (25.3–48.8) | 79.9 (73.1–85.3) | 41.0 (28.8–54.3) | 76.5 (69.6–82.2) | 1.80 (1.17–2.76) | 0.80 (0.67–0.96)
Nelson et al., 2011[14] | ED | ≥2 SIRS criteria and ≥2 SBP measurements <90 mm Hg | 0.8 | 63.6 (31.6–87.8) | 99.6 (99.0–99.8) | 53.8 (26.1–79.6) | 99.7 (99.2–99.9) | 145.8 (58.4–364.1) | 0.37 (0.17–0.80)
Nguyen et al., 2014[15] | ED | ≥2 SIRS criteria and ≥1 sign of shock (SBP ≤90 mm Hg or lactic acid ≥2.0 mmol/L) | Unable to estimate^b | Unable to estimate^b | Unable to estimate^b | 44.7 (41.2–48.2) | 100.0^c (98.8–100.0) | Unable to estimate^b | Unable to estimate^b
Thiel et al., 2010[16] | Wards | Recursive partitioning tree analysis including vitals and laboratory results^d | 4.7 | 17.1 (15.1–19.3) | 96.7 (96.5–96.9) | 20.5 (18.2–23.0) | 95.9 (95.7–96.2) | 5.22 (4.56–5.98) | 0.86 (0.84–0.88)
All alert systems had suboptimal PPV (20.5%‐53.8%). The 2 studies that designed the sepsis alert to activate by SIRS criteria alone[10, 13] had a positive predictive value of 41% and a positive LR of 1.21 to 1.80. The ability to exclude the presence of sepsis varied considerably depending on the clinical setting. The study by Hooper et al.[10] that examined the alert among patients in the medical ICU appeared more effective at ruling out sepsis (NPV=96.7%; negative LR=0.06) compared to a similar alert system used by Meurer et al.[13] that studied patients in the ED (NPV=76.5%, negative LR=0.80).
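The low PPVs partly reflect disease prevalence rather than the alert logic itself: for fixed sensitivity and specificity, PPV follows from Bayes' rule. A short illustrative check (the function name is ours; the inputs are taken from the Hooper et al. row of Table 3) reproduces the reported PPV and shows how the same alert would fare at a lower prevalence:

```python
def ppv_from_prevalence(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sens * prev              # P(alert and sepsis)
    false_pos = (1 - spec) * (1 - prev)  # P(alert and no sepsis)
    return true_pos / (true_pos + false_pos)


# Hooper et al. (MICU): sens 98.9%, spec 18.1%, prevalence 36.3%
ppv_icu = ppv_from_prevalence(0.989, 0.181, 0.363)  # close to reported 40.7%

# The same test characteristics at a 5% prevalence yield a far lower PPV
ppv_low = ppv_from_prevalence(0.989, 0.181, 0.05)
```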
There were also differences in the diagnostic accuracy of the sepsis alert systems depending on how the threshold for activating the sepsis alert was defined and applied in the study. Two studies evaluated a sepsis alert system among patients presenting to the ED at the same academic medical center.[13, 14] The alert system (Nelson et al.) that was triggered by a combination of SIRS criteria and hypotension (PPV=53.8%, LR+=145.8; NPV=99.7%, LR−=0.37) outperformed the alert system (Meurer et al.) that was triggered by SIRS criteria alone (PPV=41.0%, LR+=1.80; NPV=76.5%, LR−=0.80). Furthermore, the study by Meurer and colleagues evaluated the accuracy of the alert system only among patients who were hospitalized after presenting to the ED, rather than among all consecutive patients presenting to the ED. This selection bias likely falsely inflated the diagnostic accuracy of the alert system used by Meurer et al., suggesting that the alert system triggered by a combination of SIRS criteria and hypotension was comparatively even more accurate.
Two studies evaluating the diagnostic accuracy of the alert system were deemed to be high quality (Table 4). Three studies were considered low quality: 1 study did not include all patients in its assessment of diagnostic accuracy[13]; 1 study consecutively selected alert cases but randomly selected nonalert cases, greatly limiting the assessment of diagnostic accuracy[15]; and the third study applied a gold standard that was unlikely to correctly classify sepsis (septic shock requiring ICU transfer with vasopressor support in the first 24 hours was defined by discharge International Classification of Diseases, Ninth Revision diagnoses without chart review), with a considerable delay from the alert system trigger (alert identification was compared to the discharge diagnosis rather than physician review of real‐time data).[16]
Study | Patient Selection | Index Test | Reference Standard | Flow and Timing
---|---|---|---|---
Hooper et al., 2012[10] | +++ | +++ | ++^b | +++
Meurer et al., 2009[13] | +++ | +++ | ++^b | +^c
Nelson et al., 2011[14] | +++ | +++ | ++^b | +++
Nguyen et al., 2014[15] | +^d | +++ | +^e | +++
Thiel et al., 2010[16] | +++ | +++ | +^f | +^g
Effectiveness of Automated Electronic Sepsis Alert Systems
Characteristics of the studies evaluating the effectiveness of automated electronic sepsis alert systems are summarized in Table 2. Regarding activation of the sepsis alert, 2 studies notified the provider directly by an automated text page and a passive EHR alert (not requiring the provider to acknowledge the alert or take action),[10, 14] 1 study notified the provider by a passive electronic alert alone,[17] and 1 study only employed an automated text page.[19] Furthermore, if the sepsis alert was activated, 2 studies suggested specific clinical management decisions,[14, 17] 2 studies left clinical management decisions solely to the discretion of the treating provider,[10, 19] and 1 study assisted the diagnosis of sepsis by prompting nurses to complete a second manual sepsis risk evaluation.[18]
Table 5 summarizes the effectiveness of automated electronic sepsis alert systems. Two studies evaluating the effectiveness of the sepsis alert system were considered to be high‐quality studies based on the use of a contemporaneous control group to account for temporal trends and an intention‐to‐treat analysis.[10, 19] The 2 studies evaluating the effectiveness of a sepsis alert system in the ED were considered low quality due to before‐and‐after designs without an intention‐to‐treat analysis.[14, 17]
Source | Outcomes Evaluated | Key Findings | Quality
---|---|---|---
Hooper et al., 2012[10] | Primary: time to receipt of antibiotic (new or changed) | No difference (6.1 hours for control vs 6.0 hours for intervention, P=0.95) | High
 | Secondary: sepsis‐related process measures and outcomes | No difference in amount of 6-hour IV fluid administration (964 mL vs 1,019 mL, P=0.6), collection of blood cultures (adjusted HR 1.01; 95% CI, 0.76 to 1.35), collection of lactate (adjusted HR 0.84; 95% CI, 0.54 to 1.30), ICU length of stay (3.0 vs 3.0 days, P=0.2), hospital length of stay (4.7 vs 5.7 days, P=0.08), and hospital mortality (10% for control vs 14% for intervention, P=0.3) | 
Sawyer et al., 2011[19] | Primary: sepsis‐related process measures (antibiotic escalation, IV fluids, oxygen therapy, vasopressor initiation, diagnostic testing [blood culture, CXR]) within 12 hours of alert | Increases in receiving ≥1 measure (56% for control vs 71% for intervention, P=0.02), antibiotic escalation (24% vs 36%, P=0.04), IV fluid administration (24% vs 38%, P=0.01), and oxygen therapy (8% vs 20%, P=0.005). There were nonsignificant increases in obtaining diagnostic tests (40% vs 52%, P=0.06) and vasopressor initiation (3% vs 6%, P=0.4) | High
 | Secondary: ICU transfer, hospital length of stay, hospital length of stay after alert, in‐hospital mortality | Similar rate of ICU transfer (23% for control vs 26% for intervention, P=0.6), hospital length of stay (7 vs 9 days, median, P=0.8), hospital length of stay after alert (5 vs 6 days, median, P=0.7), and in‐hospital mortality (12% vs 10%, P=0.7) | 
Berger et al., 2010[17] | Primary: lactate collection in ED | Increase in lactate collection in the ED (5.2% before vs 12.7% after alert implemented; absolute increase of 7.5%; 95% CI, 6.0% to 9.0%) | Low
 | Secondary: lactate collection among hospitalized patients, proportion of patients with abnormal lactate (≥4 mmol/L), and in‐hospital mortality among hospitalized patients | Increase in lactate collection among hospitalized patients (15.3% vs 34.2%; absolute increase of 18.9%; 95% CI, 15.0% to 22.8%); decrease in the proportion of abnormal lactate values (21.9% vs 14.8%; absolute decrease of 7.6%; 95% CI, −15.8% to −0.6%); and no significant difference in mortality (5.7% vs 5.2%; absolute decrease of 0.5%; 95% CI, −1.6% to 2.6%; P=0.6) | 
McRee et al., 2014[18] | Stage of sepsis, length of stay, mortality, discharge location | Nonsignificant decrease in stage of sepsis (34.7% with septic shock before vs 21.9% after, P>0.05); no difference in length of stay (8.5 days before vs 8.7 days after, P>0.05). Decrease in mortality (9.3% before vs 1.0% after, P<0.05) and increase in the proportion of patients discharged home (25.3% before vs 49.0% after, P<0.05) | Low
Nelson et al., 2011[14] | Frequency and time to completion of process measures: lactate, blood culture, CXR, and antibiotic initiation | Increases in blood culture collection (OR 2.9; 95% CI, 1.1 to 7.7) and CXR (OR 3.2; 95% CI, 1.1 to 9.5); nonsignificant increases in lactate collection (OR 1.7; 95% CI, 0.9 to 3.2) and antibiotic administration (OR 2.8; 95% CI, 0.9 to 8.3). Only blood cultures were collected in a more timely manner (median of 86 minutes before vs 81 minutes after alert implementation, P=0.03) | Low
Neither of the 2 high‐quality studies that included a contemporaneous control found evidence for improving inpatient mortality or hospital and ICU length of stay.[10, 19] The impact of sepsis alert systems on improving process measures for sepsis management depended on the clinical setting. In a randomized controlled trial of patients admitted to a medical ICU, Hooper et al. did not find any benefit of implementing a sepsis alert system on improving intermediate outcome measures such as antibiotic escalation, fluid resuscitation, and collection of blood cultures and lactate.[10] However, in a well‐designed observational study, Sawyer et al. found significant increases in antibiotic escalation, fluid resuscitation, and diagnostic testing in patients admitted to the medical wards.[19] Both studies that evaluated the effectiveness of sepsis alert systems in the ED showed improvements in various process measures,[14, 17] but without improvement in mortality.[17] The single study that showed improvement in clinical outcomes (in‐hospital mortality and disposition location) was of low quality due to its before‐and‐after design without adjustment for potential confounders and its lack of an intention‐to‐treat analysis (only individuals with a discharge diagnosis of sepsis were included, rather than all individuals who triggered the alert).[18] Additionally, the preintervention group had a higher proportion of individuals with septic shock compared to the postintervention group, raising the possibility that the observed improvement was due to a difference in severity of illness between the 2 groups rather than to the intervention.
None of the studies included in this review explicitly reported on the potential harms (eg, excess antimicrobial use or alert fatigue) after implementation of sepsis alerts, but Hooper et al. found a nonsignificant increase in mortality, and Sawyer et al. showed a nonsignificant increase in the length of stay in the intervention group compared to the control group.[10, 19] Berger et al. showed an overall increase in the number of lactate tests performed, but with a decrease in the proportion of abnormal lactate values (21.9% vs 14.8%; absolute decrease of 7.6%; 95% confidence interval, −15.8% to −0.6%), suggesting potential overtesting in patients at low risk for septic shock. In the study by Hooper et al., 88% (442/502) of the patients in the medical intensive care unit triggered an alert, raising the concern for alert fatigue.[10] Furthermore, 3 studies did not perform intention‐to‐treat analyses; rather, they included only patients who triggered the alert and also had provider‐suspected or confirmed sepsis,[14, 17] or had a discharge diagnosis for sepsis.[18]
DISCUSSION
The use of sepsis alert systems derived from electronic health data and targeting hospitalized patients improves a subset of sepsis process-of-care measures, but at the cost of poor positive predictive value and no clear improvement in mortality or length of stay. There is insufficient evidence for the effectiveness of automated electronic sepsis alert systems in the emergency department.
We found considerable variability in the diagnostic accuracy of automated electronic sepsis alert systems. There was moderate evidence that alert systems designed to identify severe sepsis (eg, SIRS criteria plus measures of shock) had greater diagnostic accuracy than alert systems that detected sepsis based on SIRS criteria alone. Given that SIRS criteria are highly prevalent among hospitalized patients with noninfectious diseases,[20] sepsis alert systems triggered by standard SIRS criteria may have poorer predictive value, with an increased risk of alert fatigue: excessive electronic warnings that result in physicians disregarding clinically useful alerts.[21] The potential for alert fatigue is even greater in critical care settings. A retrospective analysis of physiological alarms in the ICU estimated an average of 6 alarms per hour, with only 15% of alarms considered clinically relevant.[22]
The fact that sepsis alert systems improve intermediate process measures among ward and ED patients but not ICU patients likely reflects differences in both the patients and the clinical settings.[23] First, patients in the ICU may already be prescribed broad spectrum antibiotics, aggressively fluid resuscitated, and have other diagnostic testing performed before the activation of a sepsis alert, so it would be less likely to see an improvement in the rates of process measures assessing initiation or escalation of therapy compared to patients treated on the wards or in the ED. The apparent lack of benefit of these systems in the ICU may merely represent a ceiling effect. Second, nurses and physicians are already vigilantly monitoring patients in the ICU for signs of clinical deterioration, so additional alert systems may be redundant. Third, patients in the ICU are connected to standard bedside monitors that continuously monitor for the presence of abnormal vital signs. An additional sepsis alert system triggered by SIRS criteria alone may be superfluous to the existing infrastructure. Fourth, the majority of patients in the ICU will trigger the sepsis alert system,[10] so there likely is a high noise‐to‐signal ratio with resultant alert fatigue.[21]
In addition to a greater emphasis on alert systems of greater diagnostic accuracy and effectiveness, our review notes several important gaps that limit the evidence supporting the usefulness of automated sepsis alert systems. First, there are few data describing the optimal design of sepsis alerts[24, 25] or the frequency with which they are appropriately acted upon or dismissed. In addition, we found few data on whether the effectiveness of alert systems differed based on whether clinical decision support was included with the alert itself (eg, direct prompting with specific clinical management recommendations) or on the configuration of the alert (eg, interruptive or informational).[24, 25] Most of the studies we reviewed employed alerts primarily targeting physicians; we found little evidence for systems that also alerted other providers (eg, nurses or rapid response teams). Few studies provided data on the harms of these systems (eg, excess antimicrobial use, fluid overload due to aggressive fluid resuscitation) or on how often these treatments were administered to patients who did not eventually have sepsis. Few studies employed study designs that limited biases (eg, randomized or quasiexperimental designs) or used an intention‐to‐treat approach. Studies that exclude false positive alerts from analyses could bias estimates toward making sepsis alert systems appear more effective than they actually are. Finally, although deploying automated sepsis alerts in the ED would presumably facilitate more timely recognition and treatment, more rigorously conducted studies are needed to identify whether using these alerts in the ED is of greater value compared to the wards and ICU. Given the limited number of studies included in this review, we were unable to make strong conclusions regarding the clinical benefits and cost‐effectiveness of implementing automated sepsis alerts.
Our review has certain limitations. First, despite our extensive literature search strategy, we may have missed studies published in the grey literature or in non‐English languages. Second, there is potential for publication bias, given the number of abstracts we identified addressing at least 1 of our prespecified research questions compared to the number of peer‐reviewed publications identified by our search strategy.
CONCLUSION
Automated electronic sepsis alert systems have promise in delivering early goal‐directed therapies to patients. However, at present, automated sepsis alerts derived from electronic health data may improve care processes but tend to have poor PPV and have not been shown to improve mortality or length of stay. Future efforts should develop and study methods for sepsis alert systems that avoid the potential for alert fatigue while improving outcomes.
Acknowledgements
The authors thank Gloria Won, MLIS, for her assistance with developing and performing the literature search strategy and wish her a long and joyous retirement.
Disclosures: Part of Dr. Makam's work on this project was completed while he was a primary care research fellow at the University of California, San Francisco, funded by a National Research Service Award (training grant T32HP19025‐07‐00). Dr. Makam is currently supported by the National Center for Advancing Translational Sciences of the National Institutes of Health (KL2TR001103). Dr. Nguyen was supported by the Agency for Healthcare Research and Quality (R24HS022428‐01). Dr. Auerbach was supported by an NHLBI K24 grant (K24HL098372). Dr. Makam had full access to the data in the study and takes responsibility for the integrity of the data and accuracy of the data analysis. Study concept and design: all authors. Acquisition of data: Makam and Nguyen. Analysis and interpretation of data: all authors. Drafting of the manuscript: Makam. Critical revision of the manuscript: all authors. Statistical analysis: Makam and Nguyen. The authors have no conflicts of interest to disclose.
1. National inpatient hospital costs: the most expensive conditions by payer, 2011. Statistical brief #160. Healthcare Cost and Utilization Project (HCUP) Statistical Briefs. Rockville, MD: Agency for Healthcare Research and Quality; 2013.
2. Inpatient care for septicemia or sepsis: a challenge for patients and hospitals. NCHS Data Brief. 2011;(62):1–8.
3. The epidemiology of sepsis in the United States from 1979 through 2000. N Engl J Med. 2003;348(16):1546–1554.
4. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med. 2013;41(2):580–637.
5. Early goal‐directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368–1377.
6. A randomized trial of protocol‐based care for early septic shock. N Engl J Med. 2014;370(18):1683–1693.
7. Implementation of early goal‐directed therapy for septic patients in the emergency department: a review of the literature. J Emerg Nurs. 2013;39(1):13–19.
8. Factors influencing variability in compliance rates and clinical outcomes among three different severe sepsis bundles. Ann Pharmacother. 2007;41(6):929–936.
9. Improvement in process of care and outcome after a multicenter severe sepsis educational program in Spain. JAMA. 2008;299(19):2294–2303.
10. Randomized trial of automated, electronic monitoring to facilitate early detection of sepsis in the intensive care unit. Crit Care Med. 2012;40(7):2096–2101.
11. QUADAS‐2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529–536.
12. AHRQ series paper 5: grading the strength of a body of evidence when comparing medical interventions—Agency for Healthcare Research and Quality and the Effective Health-Care Program. J Clin Epidemiol. 2010;63(5):513–523.
13. Real‐time identification of serious infection in geriatric patients using clinical information system surveillance. J Am Geriatr Soc. 2009;57(1):40–45.
14. Prospective trial of real‐time electronic surveillance to expedite early care of severe sepsis. Ann Emerg Med. 2011;57(5):500–504.
15. Automated electronic medical record sepsis detection in the emergency department. PeerJ. 2014;2:e343.
16. Early prediction of septic shock in hospitalized patients. J Hosp Med. 2010;5(1):19–25.
17. A computerized alert screening for severe sepsis in emergency department patients increases lactate testing but does not improve inpatient mortality. Appl Clin Inform. 2010;1(4):394–407.
18. The impact of an electronic medical record surveillance program on outcomes for patients with sepsis. Heart Lung. 2014;43(6):546–549.
19. Implementation of a real‐time computerized sepsis alert in nonintensive care unit patients. Crit Care Med. 2011;39(3):469–473.
20. The epidemiology of the systemic inflammatory response. Intensive Care Med. 2000;26(suppl 1):S64–S74.
21. Overrides of medication‐related clinical decision support alerts in outpatients. J Am Med Inform Assoc. 2014;21(3):487–491.
22. Intensive care unit alarms: how many do we need? Crit Care Med. 2010;38(2):451–456.
23. How can we best use electronic data to find and treat the critically ill? Crit Care Med. 2012;40(7):2242–2243.
24. Identifying best practices for clinical decision support and knowledge management in the field. Stud Health Technol Inform. 2010;160(pt 2):806–810.
25. Best practices in clinical decision support: the case of preventive care reminders. Appl Clin Inform. 2010;1(3):331–345.
Statins for all eligible under new guidelines could save lives
BALTIMORE – If all Americans eligible for statins under new American College of Cardiology/American Heart Association guidelines actually took them, thousands of deaths per year from cardiovascular disease might be prevented but at a cost of increased incidence of diabetes and myopathy.
The 2013 ACC/AHA guidelines expand criteria for the use of statins for primary prevention of CVD to more Americans (Circulation 2015;131:A05). Compliance with those guidelines would save 7,930 lives per year that would have been lost to CVD, according to Quanhe Yang, Ph.D., of the Centers for Disease Control and Prevention’s Division for Heart Disease and Stroke Prevention, and colleagues from the CDC and Emory University, Atlanta. Dr. Yang presented the findings at the American Heart Association Epidemiology and Prevention, Lifestyle and Cardiometabolic Health 2015 Scientific Sessions.
Statins are now indicated for primary prevention of CVD for anyone with an LDL cholesterol level greater than or equal to 190 mg/dL, for individuals aged 40-75 years with diabetes, and for those aged 40-75 years with LDL cholesterol greater than or equal to 70 mg/dL but less than 190 mg/dL who have at least a 7.5% estimated 10-year risk of developing atherosclerotic CVD. This means that an additional 24.2 million Americans are now eligible for statins but are not taking one, according to Dr. Yang and coinvestigators. However, “no study has assessed the potential impact of statin therapy under the new guidelines,” said Dr. Yang.
To estimate treatment group-specific atherosclerotic CVD mortality, investigators first estimated hazard ratios for each treatment group by sex from the National Health and Nutrition Examination Survey III (NHANES III)–linked mortality files. These hazard ratios were then applied to data from NHANES 2005-2010, the 2010 Multiple Cause of Death file, and the 2010 U.S. census to obtain age-, race-, and sex-specific atherosclerotic CVD mortality for each treatment group.
Applying the per-group hazard ratios, Dr. Yang and colleagues calculated that 7,930 atherosclerotic CVD deaths per year would be prevented with full statin compliance, a reduction of 12.6%. However, the model also predicted an additional 16,400 cases of diabetes caused by statin use, he cautioned. More cases of myopathy would occur as well, though the estimated number depends on whether the rate is derived from randomized controlled trials (RCTs) or from population-based reports of myopathy. If the RCT data are used, just 1,510 excess cases of myopathy would be seen, in contrast to the 36,100 cases predicted by population-based data.
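The reported figures can be combined into back-of-the-envelope quantities that the article implies but does not state. The sketch below is illustrative only: the derived baseline death count and the crude harm-to-benefit ratios are assumptions computed from the reported numbers, not results from the study itself.

```python
# Illustrative arithmetic based on the figures reported by Dr. Yang and
# colleagues. Derived quantities (baseline deaths, harm-to-benefit ratios)
# are back-of-the-envelope assumptions, not study results.

deaths_prevented = 7_930      # annual atherosclerotic CVD deaths averted
reduction = 0.126             # reported 12.6% relative reduction

# Implied baseline annual atherosclerotic CVD deaths in the modeled group
baseline_deaths = deaths_prevented / reduction

excess_diabetes = 16_400      # modeled new diabetes cases per year
excess_myopathy_rct = 1_510   # myopathy estimate from RCT-derived rates
excess_myopathy_pop = 36_100  # myopathy estimate from population-based rates

# Crude excess-cases-per-death-prevented ratios under the two estimates
ratio_rct = (excess_diabetes + excess_myopathy_rct) / deaths_prevented
ratio_pop = (excess_diabetes + excess_myopathy_pop) / deaths_prevented

print(f"Implied baseline annual deaths: {baseline_deaths:,.0f}")
print(f"Excess cases per death prevented (RCT myopathy): {ratio_rct:.1f}")
print(f"Excess cases per death prevented (population myopathy): {ratio_pop:.1f}")
```

Under these assumptions, the same 7,930 prevented deaths correspond to roughly 2.3 or 6.6 excess adverse-event cases per death prevented, depending on which myopathy estimate is used, which is why the RCT-versus-population discrepancy matters for interpreting the trade-off.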
The study could model only deaths caused by CVD, not the reduction in overall CVD disease burden that would result if all of the additional 24.2 million Americans took a statin, Dr. Yang noted. Other limitations included the disagreement between RCTs and population-based studies on the incidence of myopathy, as well as the likelihood that the risk of diabetes increases with age and with higher statin dose – effects not accounted for in the study.
Questioning after the talk focused on sex-specific differences in statin takers. For example, statin-associated diabetes is more common in women than men, another effect not accounted for in the study’s modeling, noted an audience member. Additionally, given that women have been underrepresented in clinical trials in general and in those for CVD in particular, some modeling assumptions in the present study may also lack full generalizability to women at risk for CVD.
AT AHA EPI/LIFESTYLE 2015 MEETING
Key clinical point: New statin guidelines, if followed, could save lives but increase cases of myopathy and diabetes.
Major finding: Up to 12.6% of current deaths from CVD could be prevented if all guideline-eligible Americans took statins; saving these lives would come at the cost of excess cases of diabetes and myopathy.
Data source: Analysis of U.S. census data and data from the NHANES study, together with meta-analysis of RCTs, used to model outcomes for 100% guideline-eligible statin use.
Disclosures: No authors reported financial disclosures.