Official news magazine of the Society of Hospital Medicine

High risk of low glucose? Hospital alerts promise a crucial heads-up

Researchers have been able to sustain a dramatic reduction in hypoglycemia incidents at nine St. Louis–area hospitals, thanks to a computer algorithm that warns medical staff when patients appear to be on the road to dangerously low blood sugar levels.

“Complex variables can be utilized in real time to make diabetic therapy safer,” coauthor Garry S. Tobin, MD, director of the Washington University Diabetes Center at Barnes-Jewish Hospital in St. Louis, said in an interview. “It can be a useful tool, and it’s sustainable.”

The 6-year retrospective system-wide study, which was released at the annual scientific sessions of the American Diabetes Association, found that the use of the alert system lowered the annual occurrence of severe hypoglycemia events by 41% at the hospitals.

In at-risk patients – those with blood glucose levels under 90 mg/dL – the system considers several variables, such as their weight, creatinine clearance, insulin therapy, and basal insulin doses. If the algorithm considers that a patient is at high risk of a sub–40-mg/dL glucose level – dangerously low – it sends a single alert to medical staff during the patient’s stay.

The idea is that the real-time alerts will go to nurses or pharmacists who will review patient charts and then contact physicians. The doctors are expected to “make clinically appropriate changes,” Dr. Tobin said.
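
The published abstract does not spell out the algorithm’s weights or cutoffs, but the screening logic described above can be illustrated with a simple rule-based sketch. Everything below other than the 90-mg/dL screening threshold and the sub–40-mg/dL definition of severe hypoglycemia is hypothetical.

# Minimal Python sketch of a rule-based hypoglycemia alert; the variable
# names, weights, and cutoffs are illustrative only -- the study does not report them.
def hypoglycemia_alert(glucose_mg_dl, weight_kg, creatinine_clearance, basal_insulin_units_per_kg):
    if glucose_mg_dl >= 90:              # only patients already under 90 mg/dL are screened
        return False
    risk = 0
    if creatinine_clearance < 30:        # impaired clearance prolongs insulin action
        risk += 2
    if basal_insulin_units_per_kg > 0.5: # relatively high basal insulin dosing
        risk += 2
    if weight_kg < 60:                   # low body weight
        risk += 1
    # a single alert per stay is routed to a nurse or pharmacist for chart review
    return risk >= 3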

Earlier, Dr. Tobin and colleagues prospectively analyzed the alert system’s effectiveness at a single hospital for 5 months. The trial, a cohort intervention study, tracked 655 patients with a blood glucose level under 90 mg/dL.

In 2014, the researchers reported the results of that trial: The alert identified 390 of the patients as being at high risk for severe hypoglycemia (blood glucose under 40 mg/dL). The frequency of severe hypoglycemia events was just 3.1% in this population vs. 9.7% in unalerted patients who were also deemed to be at high risk (J Hosp Med. 2014;9:621-6).

For the new study, researchers extended the alert system to nine hospitals and tracked its use from 2011 to 2017.

During all visits, the number of severe hypoglycemic events fell from 2.9 to 1.7 per 1,000 at-risk patient-days (P less than .001).
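
For readers checking the arithmetic, the 41% reduction cited above follows directly from these two rates; a quick verification in Python:

# Severe hypoglycemia events per 1,000 at-risk patient-days, before and after the alert program
before, after = 2.9, 1.7
relative_reduction = (before - after) / before
print(f"{relative_reduction:.0%}")  # prints "41%", matching the reported reduction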

At one hospital, Dr. Tobin said, the average monthly number of severe hypoglycemia incidents fell from 40 to 12.

Researchers found that the average blood glucose level post alert was 93 mg/dL vs. 74 mg/dL before alert. They also reported that the system-wide total of alerts per year ranged from 4,142 to 5,649.

“The current data reflected in our poster show that the alert process is sustainable over a wide range of clinical settings, including community hospitals of various size and complexity, as well as academic medical centers,” Dr. Tobin said.

The alert system had no effect on hyperglycemia, Dr. Tobin said.

In regard to expense, Dr. Tobin said it’s small because the alert system uses existing staff and computer systems. Setup costs, he said, included programming, creating the alert infrastructure, and staff training.

No study funding is reported. Dr. Tobin reports relationships with Novo Nordisk (advisory board, speaker’s bureau) and MannKind (speaker’s bureau). The other authors report no relevant disclosures.

SOURCE: Tobin G et al. ADA 2018. Abstract 397-P.

Vitals

 

Key clinical point: Hospitals were able to sustain lower numbers of severe hypoglycemia events over 6 years by using a prewarning alert system.

Major finding: The number of severe hypoglycemic events (below 40 mg/dL) fell from 2.9 per 1,000 at-risk patient-days to 1.7 per 1,000 at-risk patient-days.

Study details: Retrospective, system-wide study of nine hospitals with the alert system in place from 2011 to 2017.

Disclosures: No funding is reported. One author reports relationships with Novo Nordisk and MannKind. The other authors report no relevant disclosures.

Source: Tobin G et al. ADA 2018. Abstract 397-P.


New look at ATLAS suggests rivaroxaban may still have role in ACS

ATLAS reanalysis shines a light on rivaroxaban’s overlooked benefits

In a new analysis comparing only clinically similar outcomes in patients with acute coronary syndrome, the addition of rivaroxaban to standard antiplatelet therapy resulted in 115 fewer fatal or irreversible ischemic events per 10,000 patient-years than placebo, at the expense of only 10 additional fatal or seriously harmful events.

This new interpretation of the ATLAS ACS 2-TIMI 51 trial (Anti-Xa Therapy to Lower Cardiovascular Events in Addition to Standard Therapy in Subjects with Acute Coronary Syndrome–Thrombolysis in Myocardial Infarction-51) suggests that the factor Xa inhibitor may still carve out a place for itself in ACS therapy, despite Food and Drug Administration rejections for this indication.

Not only did the survival benefit of rivaroxaban appear early in postevent treatment, but it also continued to protect patients over time, C. Michael Gibson, MD, and colleagues reported in the Journal of the American College of Cardiology.

“Time-to-event analysis demonstrated that the risk of fatal or irreversible harm remained low and constant over time, whereas reduction in fatal or irreversible ischemic events expanded,” wrote Dr. Gibson, professor of medicine at Beth Israel Deaconess Medical Center, Boston, and his coinvestigators. “By 720 days, a net of 142 fatal or irreversible events would have been prevented by 2.5-mg oral doses twice per day of rivaroxaban. Additional time-to-event sensitivity analyses demonstrated similar results, even when TIMI major bleeding was included as a fatal or irreversible event.”

In conducting the new analysis, Dr. Gibson and his team argued that the original interpretation of the results of ATLAS ACS 2-TIMI 51 lumped both fatal and nonfatal events together in composite endpoints, resulting in an inaccurate real-life picture of rivaroxaban’s therapeutic potential. “All types of events [were] weighted equally; for example, reversible nonintracranial hemorrhage, nonfatal bleeds that can be managed with supportive care, are weighted equally with death and disabling stroke. Second, stroke can be either hemorrhagic or ischemic, and the relative contributions of hemorrhagic or ischemic stroke may not be appropriately assigned to risk-versus-benefit categories in many analyses.”

The net result was that, while rivaroxaban did reduce the risk of the composite endpoint (cardiovascular death, MI, or stroke), the 1.7% absolute difference in cardiovascular mortality was almost completely offset by a 1.3% increase in major bleeding. However, most of those bleeds were reversible and nonfatal, associated with a drop in hemoglobin and/or blood transfusion. The drug did not increase the risk of fatal bleeding.

Giving equal statistical weight to clinically equal events provides a clearer focus, the investigators said.

“In this form of analysis, only fatal or irreversible events were included so that benefit and seriously harmful events of similar clinical impact were compared,” they wrote. “This is particularly important when the endpoints and analyses do not include measurements of subjective clinical impact such as utility measurements or preference weights. This approach also uses risk differences rather than relative measurements such as hazard ratios, so the number of events prevented and caused are clearly distinguished.”

ATLAS comprised more than 15,000 patients with ST-segment elevation MI, non-STEMI, or unstable angina. They were randomized to rivaroxaban 2.5 mg orally twice daily, rivaroxaban 5 mg orally twice daily, or placebo, in addition to standard of care, which included low-dose aspirin. Patients were stratified by the optional use of clopidogrel/ticlopidine.

Dr. Gibson and his team reanalyzed the data by comparing outcomes they judged as having a similar clinical impact: fatal and irreversible cardiovascular death, MI, and ischemic stroke. They also assessed all bleeding, TIMI life-threatening bleeding, and TIMI major bleeding.

In this analysis, the 2.5-mg dose was associated with 115 fewer fatal or irreversible ischemic events per 10,000 patient-years of exposure than placebo (548 vs. 663 nonbleeding cardiovascular deaths, MIs, or ischemic strokes).

However, the same dose was also associated with an excess of 10 fatal or irreversible seriously harmful events per 10,000 patient-years, compared with placebo (33 fatal bleeds or intracranial hemorrhages vs. 23 for placebo).

“Considered together, there would be 105 fatal or irreversible events prevented per 10,000 patient-years of exposure to 2.5 mg of rivaroxaban taken orally twice a day, compared with placebo. An alternate interpretation of the data is that there would be 11 [115/10] fatal or irreversible ischemic events prevented for each fatal or irreversible harmful event caused,” Dr. Gibson and his colleagues wrote.
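
The net-benefit arithmetic in that passage can be reproduced directly from the event counts quoted above; a quick check in Python (the variable names are ours):

# Fatal or irreversible events per 10,000 patient-years (rivaroxaban 2.5 mg twice daily vs. placebo)
ischemic_riva, ischemic_placebo = 548, 663   # nonbleeding CV death, MI, or ischemic stroke
harm_riva, harm_placebo = 33, 23             # fatal bleeding or intracranial hemorrhage

prevented = ischemic_placebo - ischemic_riva  # 115 ischemic events prevented
caused = harm_riva - harm_placebo             # 10 seriously harmful events caused
print(prevented - caused)                     # 105 net events prevented per 10,000 patient-years
print(prevented / caused)                     # 11.5 -- roughly 11 events prevented per event caused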

The benefit held when the outcomes were individually reckoned as well. If periprocedural MIs were excluded, rivaroxaban would still prevent 115 fatal or irreversible ischemic events. If only nonbleeding cardiovascular death or ischemic strokes were included, then 90 fatal or irreversible events would be prevented. And if only nonbleeding cardiovascular death was included, then 95 events would be prevented per 10,000 patient-years of exposure in the group taking rivaroxaban 2.5 mg twice daily.

“In all cases, the fatal or irreversible events prevented are 9-11 times the fatal or irreversible seriously harmful events caused,” the investigators said.

ATLAS ACS 2-TIMI 51 was supported by Johnson & Johnson and Bayer Healthcare. Dr. Gibson has received institutional funding, grants, and honoraria from those companies and from Portola Pharmaceuticals.
 


SOURCE: Gibson CM et al. J Am Coll Cardiol. 2018;72:129-36.

Commentary

 

Balancing the risks and benefits of anticoagulation therapy after an acute coronary event leaves physicians on the horns of a dilemma. How do we choose the most effective and the least harmful antiplatelet and/or antithrombotic strategy?

To support decision making, a careful and thoughtful interpretation of the existing evidence is essential, with an explicit focus on the risk-versus-benefit assessment. Even the most well-designed trial can contain ambiguities, the study investigators noted, and ATLAS was one of these.

The reanalysis of ATLAS by Gibson et al. is an attempt to cut through some of these ambiguities. By comparing only serious or fatal outcomes, the investigators aimed to bring clinically meaningful insight into the picture. Such a way of reporting provides readers with an extra piece of information to assist in deciding whether a treatment should be used.

The analysis isn’t perfect. It doesn’t include less-serious bleeding events, which still may contribute to a poor prognosis. And the analysis didn’t take into account ischemia-driven revascularizations.

Although commonly successful, repeat revascularizations are not free from complications, which may include occurrence of large infarctions, stroke, and serious bleeding.

Nevertheless, the study enhances our understanding of how to best employ low-dose rivaroxaban therapy in addition to antiplatelet agents.

Although we are getting closer to therapy optimization, the final word regarding the use of low-dose rivaroxaban and other agents for secondary prevention of cardiovascular diseases has not yet been said. This is primarily because of substantial variation in the magnitude of the risks and benefits across a population. Comprehensive, individualized profiling of the patients with respect to their ischemic and bleeding risks is crucial to further improve acute coronary syndrome–related outcomes.
 

Eugenia Nikolsy, MD, PhD, and Freek Verheugt, MD, made these comments in an accompanying editorial (J Am Coll Cardiol. 2018;72:137-40). Dr. Nikolsy is director of clinical research in invasive cardiology at Rambam Academic Hospital, Haifa, Israel. Dr. Verheugt is a professor of cardiology at the Heart-Lung Centre at University Medical Centre, Nijmegen, the Netherlands.


Myeloproliferative neoplasms increase risk for arterial and venous thrombosis


Clinical question: What are the risks for arterial and venous thrombosis in patients with myeloproliferative neoplasms (MPNs)?

Background: Myeloproliferative neoplasms include polycythemia vera, essential thrombocythemia, and primary myelofibrosis. Prior studies have investigated the incidence of arterial and venous thrombosis in patients with myeloproliferative neoplasms, but the actual magnitude of thrombosis risk relative to the general population is unknown.

Study design: Retrospective matched-cohort study.

Setting: Sweden, using the Swedish Inpatient and Cancer Registers.

Synopsis: Using data from 1987 to 2009, 9,429 patients with MPNs were compared with 35,820 control participants to determine hazard ratios for arterial thrombosis, venous thrombosis, and any thrombosis. The highest hazard ratios were seen within 3 months of MPN diagnosis, with hazard ratios of 4.0 (95% confidence interval, 3.6-4.4) for any thrombosis, 3.0 (95% CI, 2.7-3.4) for arterial thrombosis, and 9.7 (95% CI, 7.8-12.0) for venous thrombosis. Risk decreased but remained significantly elevated through follow-up out to 20 years after diagnosis. This decrease was thought to be caused by effective thromboprophylactic and cytoreductive treatment of the MPN.

This study demonstrates a significantly elevated risk for thrombosis in patients with MPNs, with the risk highest shortly after diagnosis. It underscores the importance of timely diagnosis and treatment of MPNs to decrease early thrombosis risk.

Bottom line: Patients with MPNs have increased rates of arterial and venous thrombosis, with the highest rates within 3 months of diagnosis.

Citation: Hultcrantz M et al. Risk for arterial and venous thrombosis in patients with myeloproliferative neoplasms. Ann Intern Med. 2018 Mar 6;168(5):317-25.

Dr. Komsoukaniants is a hospitalist at UC San Diego Health and an assistant clinical professor at the University of California, San Diego.


Disparities found in access to medication treatment for OUDs


The number of Medicaid enrollees receiving medication treatment with methadone and buprenorphine rose from 2002 to 2009, driven largely by the availability of buprenorphine. A cause for concern, however, is that medication treatment increased at a much higher rate in counties with lower poverty rates – and lower concentrations of black and Hispanic residents.

“Concerted efforts are needed to ensure that [medication treatment] benefits are equitably distributed across society and reach disadvantaged individuals who may be at higher risk of experiencing opioid use disorders,” wrote Bradley D. Stein, MD, PhD, and his colleagues. The report was published in Substance Abuse.

Dr. Stein, of the RAND Corporation, and his colleagues set out to assess the changes in medication treatment use over time and how medication treatment was being used at the county level – in addition to the associations between poverty, race/ethnicity, and urbanicity. The research team analyzed Medicaid claims from 2002 to 2009 from 14 states, representing 53% of the U.S. population and 47% of 2009 Medicaid enrollees. The states selected in the analysis, chosen to represent regional and population diversity, were California, Connecticut, Florida, Georgia, Illinois, Louisiana, Massachusetts, Maryland, New York, Pennsylvania, Rhode Island, Texas, Vermont, and Wisconsin. The researchers looked at medication treatment use among 18- to 64-year-old Medicaid enrollees, excluding people who were eligible for both Medicare and Medicaid.

The variables for who received medication treatment and the data on county characteristics were well defined. Individuals who had received either methadone or buprenorphine were identified as receiving medication treatment. Some patients (3% or less) used both methadone and buprenorphine but were categorized as methadone users in the analysis to better elucidate the role of buprenorphine in medication treatment. Counties were classified as low poverty if the percentage of the county population living below the federal poverty line was below the median (less than 13.5%) for the counties in the 14 states in the analysis.

A county was classified as having a low percentage of black residents if the black share of its population was below the median (less than 5.6%) across all counties. Similarly, a county was classified as having a low percentage of Hispanic residents if the Hispanic share of its population was below the median of less than 4.2%, reported Dr. Stein, who also is affiliated with the University of Pittsburgh.
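
As a rough illustration of the county classification just described (the function and field names are ours; the cutoffs are the reported medians for the 14-state sample):

# Sketch of the county classification rules; cutoffs are the reported sample medians.
def classify_county(pct_poverty, pct_black, pct_hispanic):
    return {
        "low_poverty": pct_poverty < 13.5,     # share of residents below the federal poverty line
        "low_pct_black": pct_black < 5.6,
        "low_pct_hispanic": pct_hispanic < 4.2,
    }

# Example: a county with 10% poverty, 3% black residents, and 8% Hispanic residents
print(classify_county(10.0, 3.0, 8.0))
# {'low_poverty': True, 'low_pct_black': True, 'low_pct_hispanic': False}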

The analysis showed that from 2002 to 2009, the number of Medicaid enrollees receiving methadone increased by 20% (42,235 to 50,587), accounting for only part of the 62% increase in enrollees receiving any medication treatment (42,263 to 68,278). The real driver of increased medication treatment rates was the adoption of buprenorphine, which soared from 75 recipients in 2002 to 19,691 in 2009; by 2009, buprenorphine accounted for 29% of medication treatment among these enrollees. The growth of medication treatment varied by the characteristics of a county’s population. In 2002, urban counties had substantially higher rates of medication treatment, primarily methadone, than did rural counties (P less than .001), but no significant differences were found across counties based on the concentration of black residents or on poverty. Communities that did not have low concentrations of Hispanic residents experienced higher rates of medication treatment, regardless of poverty (P less than .01 for both low-poverty and not-low-poverty counties).

 

 

Those trends changed by 2009. Compared with individuals living in all other types of counties, those living in counties with a lower proportion of black residents and a low poverty rate were much more likely to receive medication treatment. A similar pattern was seen in counties with a lower proportion of Hispanic residents and a low poverty rate, compared with counties with higher proportions of Hispanic residents and higher poverty rates.

Dr. Stein and his colleagues cited several limitations. First, because the study analyzed Medicaid enrollees, it is not known how the findings might translate to uninsured or commercially insured patients. Another limitation is that the data run only through 2009, making it difficult to generalize the findings to the population today. Finally, the researchers used a population-based, county-level approach rather than an individual-level one.

Nevertheless, they said, the study advances understanding of the impact of buprenorphine on medication treatment among patients who receive Medicaid.

“At a time of intensive policymaker and regulatory efforts to increase [medication treatment] availability, our findings highlight the importance of ensuring that benefits of such policies are equitably distributed across society and reach disadvantaged individuals who may be at higher risk of suffering from these disorders,” Dr. Stein and his colleagues wrote.

The study was supported by a grant from the National Institute on Drug Abuse. The authors disclosed no relevant conflicts of interest.

SOURCE: Stein BD et al. Subst Abuse. 2018 Jun 22. doi: 10.1080/08897077.2018.1449166.

Publications
Topics
Sections

 

The number of Medicaid enrollees receiving medication treatment with methadone and buprenorphine rose from 2002 to 2009 because of the availability of buprenorphine. A cause for concern, however, is that medication treatment increased at a much higher rate in counties with lower poverty rates – and lower concentrations of black and Hispanic residents.

“Concerted efforts are needed to ensure that [medication treatment] benefits are equitably distributed across society and reach disadvantaged individuals who may be at higher risk of experiencing opioid use disorders,” wrote Bradley D. Stein, MD, PhD, and his colleagues. The report was published in Substance Abuse.

Dr. Stein, of Rand Corporation, and his colleagues set out to assess the changes in medication treatment use over time and how medication treatment was being used at the county level – in addition to the associations between poverty, race/ethnicity, and urbanicity. The research team analyzed Medicaid claims from 2002 to 2009 from 14 states, representing 53% of the U.S. population and 47% of 2009 Medicaid enrollees. The states selected in the analysis, chosen to represent regional and population diversity, were California, Connecticut, Florida, Georgia, Illinois, Louisiana, Massachusetts, Maryland, New York, Pennsylvania, Rhode Island, Texas, Vermont, and Wisconsin. The researchers looked at medication treatment use among 18- to 64-year-old Medicaid enrollees, excluding people who were eligible for both Medicare and Medicaid.

The variables for who received medication treatment and data on county characteristics were well defined. Individuals who had received either methadone or buprenorphine were identified as receiving medication treatment. Some patients (3% or less) used both methadone or buprenorphine but were categorized as methadone users in the analysis to better elucidate the role of buprenorphine in medication treatment. Counties were classified as low poverty if the percentage of the county population was below the median (less than 13.5%) of the counties in the 14 states in the analysis and the federal poverty line.

The racial/ethnic makeup of a county was determined to be low percentage of black people if the percentage of the black population was below the median (less than 5.6%) in all counties. Similarly, a county was considered low percentage of Hispanic residents if the proportion of the Hispanic population was below the median of less than 4.2%, reported Dr. Stein, who also is affiliated with the University of Pittsburgh.

The analysis showed that from 2002 to 2009, the proportion of Medicaid users receiving methadone increased by 20% (42,235 to 50,587), accounting for a fraction of the 62% increase in Medicaid enrollment (42,263 to 68,278). The real driver in increased medication treatment rates was the adoption of buprenorphine, which soared from 75 in 2002 to 19,691 in 2009. In 2009, 29% of Medicaid enrollees received medication treatment with buprenorphine. The growth of medication treatment varied by the characteristics of a county’s population. In 2002, urban counties had substantially higher rates of primarily methadone therapy than did rural counties (P less than.001). But no significant differences were found across the county based the concentration of black residents or poverty. Communities that did not have low concentrations of Hispanic residents experienced higher rates of medication treatment, regardless of poverty (P less than .01 for low poverty and not low poverty)

 

 

Those trends changed by 2009. Compared with individuals living in all other types of counties, those living in counties with a lower proportion of black residents and a low poverty rate were much more likely to receive medication treatment. A similar pattern was seen among populations with a lower proportion of Hispanic residents and a low poverty rate, compared with communities with high numbers of Hispanics and not low poverty rate.

Dr. Stein and his colleagues cited several limitations. First, because the study analyzed Medicaid enrollees, it is not known how the findings might translate to uninsured or commercially insured patients. Another limitation is that the study data analyzed patients until 2009, making it difficult to generalize the findings to the population today. Finally, the researchers used a population-based approach.

Nevertheless, they said, the study advances understanding of the impact of buprenorphine on medication treatment among patients who receive Medicaid.

“At a time of intensive policymaker and regulatory efforts to increase [medication treatment] availability, our findings highlight the importance of ensuring that benefits of such policies are equitably distributed across society and reach disadvantaged individuals who may be at higher risk of suffering from these disorders,” Dr. Stein and his colleagues wrote.

The study was supported by a grant from the National Institute on Drug Abuse. The authors disclosed no relevant conflicts of interest.

SOURCE: Stein BD et al. Subst Abuse. 2018 Jun 22. doi: 10.1080/08897077.2018.1449166.

 



Vitals

 

Key clinical point: Medication treatment access for opioid use disorders varies greatly among Medicaid enrollees.

Major finding: Residents of counties with a lower proportion of black residents and a low poverty rate are much more likely to receive medication treatment.

Study details: An analysis of Medicaid claims from 2002 to 2009 from 14 states representing 53% of the U.S. population and 47% of 2009 Medicaid enrollees.

Disclosures: This study was supported by a grant from the National Institute on Drug Abuse. The authors disclosed no relevant conflicts of interest.

Source: Stein BD et al. Subst Abuse. 2018 Jun 22. doi: 10.1080/08897077.2018.1449166.


Methamphetamine use climbing among opioid users


 

– As the deadly opioid epidemic continues, a new study suggests that a fast-rising number of users are turning to another drug of abuse – methamphetamine. In some cases, a researcher says, their co-use is reminiscent of the fad for “speedball” mixtures of cocaine and heroin.

During 2011-2017, the percentage of surveyed opioid users seeking treatment who reported also using methamphetamine over the past month skyrocketed from 19% to 34%, researchers reported at the 2018 annual meeting of the College on Problems of Drug Dependence.

Matthew Ellis

Use of crystal meth specifically went up by 82% and the use of prescription stimulants rose by 15%. By contrast, use of marijuana went up by just 6%, while the use of muscle relaxants and prescription sleep drugs fell by more than half.

The findings matter, because the use of multiple illicit drugs is even more dangerous than one alone, said study coauthor and doctoral candidate Matthew S. Ellis, of Washington University in St. Louis, in an interview. “Illicit opioids carry their own serious risks such as unknown purity, not knowing if heroin is laced with fentanyl, or inexperience of users who are used to clearly marked prescription pills. Add in a secondary drug, also often used in non-oral ways, and your risk for overdose is going to significantly increase.”

The rising use of methamphetamine, which comes in such forms as crystal meth, has been overshadowed by news about the opioid epidemic. Still, as a 2018 Lancet report put it, “while the opioid crisis has exploded, the lull in the methamphetamine epidemic has quietly and swiftly reversed course, now accounting for 11% of the total number of overdose deaths.”

In regard to co-use of opioids and methamphetamines, the report said, “in states including Wisconsin and Oregon, new patterns suggest they are beginning to overlap as increasing numbers of people use both drugs” (Lancet. 2018 Feb. 24;391[10122]:713).

Meanwhile, the New York Times published a story in February 2018 headlined “Meth, the forgotten killer, is back. And it’s everywhere.” It noted that meth overdose deaths in Oregon outpace those from opioids and added: “At the United States border, agents are seizing 10-20 times the amounts they did a decade ago. Methamphetamine, experts say, has never been purer, cheaper, or more lethal.”

Overall, there’s little known about co-use of opioids and methamphetamines, said study lead author Mr. Ellis. “The reason for this is that opioid use patterns and populations of users have drastically changed in the past 20 years, and continue to do so,” he said. “Methamphetamine is becoming increasingly available at the same time that heroin and illicit fentanyl are as well. Reports suggest that the United States has shifted from a market of home-grown methamphetamine to that manufactured and sent from other countries, creating a broader market than previously seen.”

For the new study, Mr. Ellis and his colleagues examined statistics from a U.S. surveillance program of opioid users entering substance abuse programs. They focused on 13,521 participants in 47 states during 2011-2017.

Of 12 drug classes examined, only co-use of methamphetamine rose significantly over the 6-year period, Mr. Ellis said.

Among demographic and geographic groups, the researchers saw the largest increases in co-use of the two drugs in the West, Northeast, and Midwest regions, in rural and suburban areas, among groups aged 18-44 years, and among whites.

Why is co-use among opioid users increasing? “We have begun to do some qualitative work with a number of participants suggesting they use opioids and methamphetamine to balance each other out,” Mr. Ellis said. “So an addict can use opioids, but if they need to go to work, they can reinvigorate themselves with methamphetamine.”

Mr. Ellis said “this is not necessarily a new trend,” noting that the co-use of the drugs is akin to the “speedball” – a mixture of cocaine and heroin designed to blend their opposite modes of action.

However, Mr. Ellis said, “the rates we are seeing appear to be much higher than what was seen for speedballs. The increases in production and spread of illicit opioids and methamphetamine into an existing market of those previously using prescription opioids was a perfect storm for these two drugs to be a problem, both separately and together.”

He said researchers also are finding that “if methamphetamine is the only thing an opioid addict can find, they will use it to stave off withdrawals as well.”

Indeed, National Public Radio reported in June 2018 that “as opioids are becoming harder to obtain, more and more users are turning to cheap methamphetamine” in Ohio’s tiny Vinton County, near Columbus.

Moving forward, Mr. Ellis said, “we cannot treat substance use in a silo of a single drug. If we attempt to treat opioid abusers by simply treating their opioid abuse – and not other drugs – then we have less of a chance of success. More of a focus needs to be put on the fact that the vast majority of opioid abusers are polysubstance users.”

The study is funded by the RADARS (Researched Abuse, Diversion and Addiction-Related Surveillance) System, an independent, nonprofit postmarketing surveillance system supported by subscription fees from pharmaceutical manufacturers that use RADARS data to track medication use and meet regulatory obligations. The study authors report no relevant disclosures.


 


Vitals

 

Key clinical point: The percentage of opioid users who also use methamphetamine is on the rise.

Major finding: During 2011-2017, the percentage of opioid users reporting methamphetamine use over the past month grew from 19% to 34%.

Study details: Analysis of 2011-2017 data from 13,521 opioid-using participants entering substance abuse programs.

Disclosures: The study is funded by the RADARS System, an independent, nonprofit postmarketing surveillance system supported by subscription fees from pharmaceutical manufacturers that use RADARS data to track medication use and meet regulatory obligations. The study authors report no relevant disclosures.


 


What to do if you encounter Candida auris


Closely monitor patients for treatment failure

 

Candida auris is an emerging, often multidrug-resistant yeast that causes invasive infections (such as bloodstream and intra-abdominal infections) and is transmitted in health care settings. It is difficult to identify using traditional yeast identification methods. C. auris also has been found in noninvasive body sites and can colonize a person without causing active infection, thereby permitting transmission of the pathogen between patients. These sites include skin, urine, the external ear, wounds, and respiratory specimens.

This fungus was first described in 2009 in an ear-discharge culture from a patient in Japan. The first clinical cases were described in South Korea in 2011. An unknown pathogen before 2009, C. auris caused 4%-8% of candidemia in Indian ICUs during 2011-2012 and 38% of candidemia in one Kenyan hospital during 2010-2013. It has now spread across Asia and Europe, only to arrive in the United States in 2016.

As of Aug. 31, 2017, a total of 153 clinical cases of C. auris infection have been reported to CDC from 10 U.S. states; most have occurred in New York and New Jersey. An additional 143 patients have been found to be colonized with C. auris. Based on epidemiologic and molecular information, including whole genome sequencing, the Centers for Disease Control and Prevention infers that most U.S. cases likely resulted from local transmission of C. auris following previous introduction from other countries in Asia.

Dr. Raghavendra Tirupathi


The majority of infections in the United States have been bloodstream infections. The reported all-cause mortality from these infections has been up to 60%. Most C. auris isolates in the United States have been resistant to at least one antifungal, most commonly fluconazole, and some patients have developed resistance to echinocandin drugs while on treatment. Amphotericin B resistance also has been seen in about 30% of isolates.

In response to global reports and a large outbreak in a specialty hospital in the United Kingdom, the CDC issued its first advisory and clinical alert to health care facilities in June 2016. It is essential for hospitalist physicians to be aware of this emerging pathogen and of the interventions needed to curb its spread, given that they are the frontline warriors in the fight against hospital-acquired infections.

The first step in controlling C. auris is identification. C. auris can be misidentified when traditional biochemical methods are used; it is most commonly misidentified as Candida haemulonii. Currently, accurate identification of C. auris can be performed with matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) systems such as the Vitek MS, using research use–only databases. Hospitalists should be aware of the diagnostic instruments used in their hospital laboratories and their ability to detect C. auris. Clinical laboratories should request testing of suspect C. auris isolates from their state or regional public health laboratory or the CDC. Laboratories should also consider reviewing historical microbiology records for suspect isolates (e.g., C. haemulonii) to identify missed cases of C. auris.

All cultures positive for Candida should be speciated, and antifungal susceptibilities should be reported, per the 2016 Infectious Diseases Society of America guidelines for candidiasis. Because many clinical laboratories do not determine the species of Candida from noninvasive sites, C. auris colonization may go unrecognized and lead to transmission. About 54% of recognized U.S. clinical cases have been identified from blood cultures. The remaining patients with positive C. auris cultures, including those with recent hospitalizations abroad, have had the organism isolated from other body sites, including skin wounds, urine, respiratory specimens, bile fluid, and ears. Determining the species of Candida from these noninvasive sites in certain situations may allow for more rapid identification of C. auris and timely implementation of targeted infection control measures to reduce transmission.

Patients have remained persistently colonized with C. auris, posing a long-term risk of transmission, and data on effective decolonization methods are currently lacking. Patients with suspected or confirmed C. auris infection should be placed in a single room when possible, standard and contact precautions should be initiated, and thorough environmental cleaning and disinfection of the patient care area should be undertaken. Use of an Environmental Protection Agency–registered antimicrobial product active against Clostridium difficile is recommended for routine and terminal disinfection.

Implement contact tracing and testing to identify other patients colonized with C. auris. Review past microbiology records (at least for the preceding 1 year) for suspect or confirmed cases of C. auris at the institution. Set up enhanced surveillance for C. auris in the laboratory setting.
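
For institutions whose laboratory information system can export historical culture results, part of this record review can be scripted. What follows is a minimal, hypothetical Python sketch, not drawn from the CDC guidance or from this article: it assumes a CSV export named yeast_isolates.csv with collected_date, organism, and specimen_source columns, and it flags isolates reported as C. haemulonii during the preceding year as candidates for confirmatory testing. The file name, column names, and organism list are illustrative only; the suspect-organism list should be taken from current CDC guidance.

import csv
from datetime import datetime, timedelta

# Organism names under which C. auris is commonly misreported; extend per current CDC guidance.
SUSPECT_ORGANISMS = {"candida haemulonii"}
# Review at least the preceding 1 year of records, as described above.
cutoff = datetime.now() - timedelta(days=365)

with open("yeast_isolates.csv", newline="") as f:
    for row in csv.DictReader(f):
        collected = datetime.strptime(row["collected_date"], "%Y-%m-%d")
        if collected >= cutoff and row["organism"].strip().lower() in SUSPECT_ORGANISMS:
            # Flag for confirmatory testing (e.g., referral to a public health laboratory);
            # this is a screening step, not an identification.
            print(row["collected_date"], row["organism"], row["specimen_source"])

Any isolates flagged this way would still need definitive identification by the methods described above before being counted as C. auris cases.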

Echinocandin drugs are the first-line treatment for most invasive Candida infections, making resistance to this class of antifungals particularly concerning. As of Sept. 15, 2017, at least five patients in the United States had echinocandin-resistant isolates; in one patient, resistance developed during treatment with an echinocandin.

Based on these findings, CDC is concerned that echinocandin-resistant C. auris could become more common. Patients with C. auris infection should be closely monitored for treatment failure, as indicated by persistently positive clinical cultures (lasting more than 5 days). Consultation with an infectious disease specialist is highly recommended.

Dr. Tirupathi is medical director, infectious diseases/HIV at Keystone Health, and chair, infection prevention, at Summit Health, both in Chambersburg, Pa. He is clinical assistant professor of medicine at Penn State University, Hershey.


 



Low risk of complications from sedation-associated GI endoscopies


Background: Most GI endoscopies use sedation to keep patients comfortable during procedures, but sedation puts patients at increased risk of complications. Most of the available studies reporting sedation-related complications are retrospective and dated. There is a lack of prospective studies investigating sedation-related complications and their associated risk factors.

Study design: Prospective study.

Setting: Thirty-nine hospitals in Germany.

Synopsis: Using data collected from 314,190 adult endoscopies in which sedation was used, this study found a major complication rate of only 0.01%. Major complications included intubation, ICU admission, resuscitation, and death. Propofol was the most commonly used sedative (61.7% of cases) and had the lowest risk of complications (odds ratio, 0.7509; P = .028). The top risk factors for complications were an American Society of Anesthesiologists class greater than 2 (OR, 2.2998; P less than .001), emergent need for the endoscopy (9 of the 13 fatal cases), and longer procedure length (P less than .001).

Bottom line: GI endoscopic procedures performed with sedation are well tolerated in the general population and carry a low risk of complications.

Citation: Behrens A et al. Acute sedation-associated complications in GI endoscopy (ProSed 2 Study): Results from the prospective multicentre electronic registry of sedation-associated complications. Gut. 2018 Jan 3. doi: 10.1136/gutjnl-2015-311037.

Dr. Ally is a hospitalist at UC San Diego Health and an assistant clinical professor at the University of California, San Diego.




Chest Pain Choice tool decreases health care utilization


 



Background: Patients who complain of chest pain make up over a quarter of annual hospital admissions, but not all chest pain is attributable to acute coronary syndrome. The one-page Chest Pain Choice (CPC) decision aid was developed to facilitate shared decision making between low-risk patients and providers regarding the work-up for chest pain.

Study design: Parallel, randomized, controlled trial.

Setting: Six U.S. medical centers.

Synopsis: After reviewing the CPC tool, patients with low cardiac risk who presented to the ED with chest pain were given the option either to be admitted to the hospital for cardiac testing or to not be admitted and instead follow up with their primary care doctor or a cardiologist within 3 days to determine what further cardiac work-up might be warranted.

Upon reviewing data obtained from 898 patient diaries regarding use of health care services, as well as from billing data from the medical centers, the researchers found no statistically significant difference between patients who used the CPC tool and those treated under usual care with regard to hospital readmission rates, length of stay in the ED, repeat ED visits, or clinic visits. However, at the 45-day follow-up mark, those in the CPC group had undergone fewer tests and cardiac imaging studies (decrease of 125.6 tests/100 patients; 95% confidence interval, 29.3-221.6).

Bottom line: Shared decision making with the Chest Pain Choice tool between providers and patients at low cardiac risk decreased some health care utilization without worsening outcomes.

Citation: Schaffer JT et al. Impact of a shared decision-making intervention on health care utilization: A secondary analysis of the Chest Pain Choice multicenter randomized trial. Acad Emerg Med. 2018 Mar;25(3):293-300.

Dr. Maryann T. Ally

Dr. Ally is a hospitalist at UC San Diego Health and an assistant clinical professor at the University of California, San Diego.


 





Steroids do not reduce mortality in patients with septic shock


Clinical question: Among patients with septic shock undergoing mechanical ventilation, does hydrocortisone reduce 90-day mortality?

Background: Septic shock is associated with a significant mortality risk, and there is no proven pharmacologic treatment other than fluids, vasopressors, and antimicrobials. Prior randomized, controlled trials have resulted in mixed outcomes, and meta-analyses and clinical practice guidelines also have not provided consistent guidance.

Study design: Randomized, controlled, double-blinded trial.

Setting: Medical centers in Australia, Denmark, New Zealand, Saudi Arabia, and the United Kingdom.

Synopsis: Over a 4-year period from 2013 to 2017, 3,658 patients with septic shock undergoing mechanical ventilation were randomized to receive either a continuous infusion of 200 mg/day of hydrocortisone for 7 days or placebo. The primary outcome, death within 90 days, occurred in 511 patients (27.9%) in the hydrocortisone group and in 526 patients (28.8%) in the placebo group (P = .50).



In secondary outcome analyses, patients in the hydrocortisone group had faster resolution of shock (3 vs. 4 days; P less than .001) and a shorter duration of initial mechanical ventilation (6 vs. 7 days; P less than .001), and fewer patients received blood transfusions (37.0% vs. 41.7%; P = .004). There was no difference in mortality at 28 days, recurrence of shock, number of days alive and out of the ICU or hospital, recurrence of mechanical ventilation, rate of renal replacement therapy, or incidence of new-onset bacteremia or fungemia.

Bottom line: Administering hydrocortisone in patients with septic shock who are undergoing mechanical ventilation does not reduce 90-day mortality.

Citation: Venkatesh B et al. Adjunctive glucocorticoid therapy in patients with septic shock. N Engl J Med. 2018 Jan 19. doi: 10.1056/NEJMoa1705835.

Dr. Huang is associate chief of the division of hospital medicine at UC San Diego Health and an associate professor of medicine at the University of California, San Diego.




Prompting during rounds decreases lab utilization in patients nearing discharge

Clinical question: Does prompting hospitalists during interdisciplinary rounds to discontinue lab orders on patients nearing discharge result in a decrease in lab testing?

Background: The Society of Hospital Medicine, as part of the Choosing Wisely campaign, has recommended against “repetitive complete blood count and chemistry testing in the face of clinical and lab stability.” Repeated phlebotomy has been shown to increase iatrogenic anemia and patient discomfort. While past interventions have been effective in decreasing lab testing, this study focused on identifying and intervening on patients who were clinically stable and nearing discharge.

Study design: Prospective, observational study.

Setting: Tertiary care teaching hospital in New York.

Dr. Bryan Huang

Synopsis: Over the course of a year, this study added a prompt to structured, bedside, interdisciplinary rounds to identify patients likely to be discharged within the next 24-48 hours; the unit medical director or nurse manager then prompted staff to discontinue lab orders for these patients when appropriate. The intervention was supplemented by clinician education and regular review of lab utilization data with hospitalists.
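
A minimal, hypothetical sketch of how such a discharge-window flag could be expressed in code follows; the study's intervention was a verbal prompt during rounds rather than software, and every name, field, and threshold below is invented for illustration.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class Patient:
    name: str
    expected_discharge: datetime   # best estimate recorded during rounds
    has_recurring_labs: bool       # e.g., standing daily CBC/chemistry orders

def flag_for_lab_review(patients: List[Patient], now: datetime,
                        window_hours: int = 48) -> List[Patient]:
    """Return patients expected to discharge within window_hours who still have recurring labs."""
    cutoff = now + timedelta(hours=window_hours)
    return [p for p in patients
            if p.has_recurring_labs and now <= p.expected_discharge <= cutoff]

# Usage example: a patient expected to leave in 30 hours with daily labs still
# ordered would be returned by flag_for_lab_review() and raised during rounds.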

The percentage of patients with labs ordered in the 24 hours prior to discharge decreased from 50.1% in the preintervention period to 34.5% in the postintervention period (P = .004). The number of labs ordered per patient-day dropped from 1.96 to 1.83 (P = .01).

Bottom line: An intervention with prompting during structured interdisciplinary rounds decreased the frequency of labs ordered for patients nearing hospital discharge.

Citation: Tsega S et al. Bedside assessment of the necessity of daily lab testing for patients nearing discharge. J Hosp Med. 2018 Jan 1;13(1):38-40.
 

Dr. Huang is associate chief of the division of hospital medicine at UC San Diego Health and an associate professor of medicine at the University of California, San Diego.
