News and Views that Matter to Physicians

U.S. to jump-start antibiotic resistance research

The Centers for Disease Control and Prevention is providing $67 million to help U.S. health departments address antibiotic resistance and related patient safety concerns.

The new funding was made available through the CDC’s Epidemiology and Laboratory Capacity for Infectious Diseases Cooperative Agreement (ELC), according to a CDC statement, and will support seven new regional laboratories with specialized capabilities allowing rapid detection and identification of emerging antibiotic-resistant threats.

James Gathany/CDC
A CDC microbiologist holds up a petri dish, on the right, inoculated with a carbapenem-resistant Enterobacteriaceae (CRE) bacterium that proved to be resistant to all of the antibiotics tested.

The CDC said it would distribute funds to all 50 state health departments, six local health departments (Chicago, the District of Columbia, Houston, Los Angeles County, New York City, and Philadelphia), and Puerto Rico, beginning Aug. 1, 2016. The agency said the grants would allow every state health department lab to test for carbapenem-resistant Enterobacteriaceae and ultimately perform whole genome sequencing on intestinal bacteria, including Salmonella, Shigella, and many Campylobacter strains.

The agency intends to provide support teams in nine state health departments for rapid response activities designed to “quickly identify and respond to the threat” of antibiotic-resistant gonorrhea in the United States, and will support high-level expertise to implement antimicrobial resistance activities in six states.

The CDC also said the promised funding would strengthen states’ ability to conduct foodborne disease tracking, investigation, and prevention, as it includes increased support for the PulseNet and OutbreakNet systems and for the Integrated Food Safety Centers of Excellence, as well as support for the National Antimicrobial Resistance Monitoring System (NARMS).

Global partnerships

Complementing the new CDC grants was an announcement from the U.S. Department of Health & Human Services that it would partner with the Wellcome Trust of London, the AMR Centre of Alderley Park (Cheshire, U.K.), and Boston University School of Law to create one of the world’s largest public-private partnerships focused on preclinical discovery and development of new antimicrobial products.

According to an HHS statement, the Combating Antibiotic Resistant Bacteria Biopharmaceutical Accelerator (CARB-X) will bring together “multiple domestic and international partners and capabilities to find potential antibiotics and move them through preclinical testing to enable safety and efficacy testing in humans and greatly reducing the business risk,” to make antimicrobial development more attractive to private sector investment.

HHS said the federal Biomedical Advanced Research and Development Authority (BARDA) would provide $30 million during the first year of CARB-X, and up to $250 million during the 5-year project. CARB-X will provide funding for research and development, and technical assistance for companies with innovative and promising solutions to antibiotic resistance, HHS said.

“Our hope is that the combination of technical expertise and life science entrepreneurship experience within the CARB-X’s life science accelerators will remove barriers for companies pursuing the development of the next novel drug, diagnostic, or vaccine to combat this public health threat,” said Joe Larsen, PhD, acting BARDA deputy director, in the HHS statement.

Simple colon surgery bundle accelerated outcomes improvement

SAN DIEGO – Implementation of a simple colon bundle reduced complication rates after colonic and enteric resections faster than the improvement seen for other procedures, according to a study that involved 23 hospitals in Tennessee.

At the American College of Surgeons/National Surgical Quality Improvement Program National Conference, Brian J. Daley, MD, discussed findings from an analysis conducted by members of the Tennessee Surgical Quality Collaborative (TSQC), which he described as “a collection of surgeons who put aside their hospital and regional affiliations to work together to help each other and to help our fellow Tennesseans.” Established in 2008, the TSQC has grown to 23 member hospitals, including 18 community hospitals and 5 academic medical centers, and provides data on nearly 600 surgeons across the state. “While this only represents about half of the surgical procedures in the state, there is sufficient statistical power to make comments about our surgical performance,” said Dr. Daley of the department of surgery at the University of Tennessee Medical Center, Knoxville.

To quantify TSQC’s impact on surgical outcomes, surgeons at the member hospitals evaluated the TSQC colon bundle, which was developed in 2012 and implemented in 2013. It bundles four processes of care: maintaining intraoperative oxygen delivery, maintaining a temperature of 36° C, making sure the patient’s blood glucose is normal, and choosing the appropriate antibiotics. “We kept it simple: easy, not expensive, and hopefully helpful,” Dr. Daley said.

With other procedures as a baseline, the researchers used statistical analyses to determine whether implementation of the bundle incrementally accelerated the reduction in complications relative to the improvement observed in those other procedures. “To understand our outcomes, we needed to prove three points: that the trend improved [a negative trend in the resection rate], that this negative trend was more negative than trends for other comparator procedures, and that the trend for intercept was not equal to the comparator in any way,” Dr. Daley explained.
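
For readers who want to see the shape of such an analysis, the three-point comparison Dr. Daley describes can be framed as a regression with a group-by-time interaction. The sketch below uses invented quarterly complication rates, not TSQC data, and an ordinary least squares fit from statsmodels; it illustrates the technique, not the collaborative’s actual analysis.

```python
import numpy as np
import statsmodels.api as sm

quarters = np.arange(8, dtype=float)  # eight quarters of (invented) data
colectomy = np.array([12.0, 11.1, 10.4, 9.2, 8.5, 7.9, 7.0, 6.4])  # % with complications
comparator = np.array([10.0, 9.8, 9.7, 9.4, 9.3, 9.1, 9.0, 8.8])

# One regression: rate ~ time + group + time:group. The time:group coefficient
# estimates how much steeper the bundle group's decline is than the
# comparator's (the "more negative trend" claim); the group coefficient
# speaks to the intercept claim.
time = np.concatenate([quarters, quarters])
group = np.concatenate([np.ones(8), np.zeros(8)])  # 1 = colectomy (bundle)
rate = np.concatenate([colectomy, comparator])
X = sm.add_constant(np.column_stack([time, group, time * group]))
fit = sm.OLS(rate, X).fit()

print(fit.params)   # [const, time, group, time:group]
print(fit.pvalues)  # a small time:group p-value means the trends differ
```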

Following adoption of the bundle, he and his associates observed that the rate of decrease in postoperative occurrences was greater in colectomy, compared with that for all other surgical procedures (P less than .001 for both the trend and the intercept statistical models). Adoption of the bundle also accelerated the decrease in postoperative occurrences among enterectomy cases (P less than .001 for both the trend and the intercept statistical models).

“We were able to demonstrate that our TSQC bundle paid dividends in improving colectomy outcomes,” Dr. Daley concluded. “We have seen these efforts spill over into enterectomy. From this we can also infer that participation in the collaborative improves outcomes and is imperative to maintain the acceleration in surgical improvement.”

Dr. Daley reported that he and his coauthors had no relevant financial disclosures.

Vitals

Key clinical point: Adoption of a colon bundle by a collaborative of Tennessee hospitals improved certain colectomy outcomes.

Major finding: Following adoption of a colon bundle, the rate of decrease in postoperative occurrences was greater in colectomy than for all other surgical procedures (P less than .001 for both the trend and the intercept statistical models).

Data source: An analysis conducted by members of the Tennessee Surgical Quality Collaborative, which included 23 hospitals in the state.

Disclosures: The researchers reported having no relevant financial disclosures.

In septic shock, vasopressin not better than norepinephrine

Vasopressin was no better than norepinephrine in preventing kidney failure when used as a first-line treatment for septic shock, according to a report published online Aug. 2 in JAMA.

In a multicenter, double-blind, randomized trial comparing the two approaches in 408 ICU patients with septic shock, the early use of vasopressin didn’t reduce the number of days free of kidney failure, compared with standard norepinephrine.

However, “the 95% confidence intervals of the difference between [study] groups has an upper limit of 5 days in favor of vasopressin, which could be clinically important,” said Anthony C. Gordon, MD, of Charing Cross Hospital and Imperial College London, and his associates. “Therefore, these results are still consistent with a potentially clinically important benefit for vasopressin; but a larger trial would be needed to confirm or refute this.”

Norepinephrine is the recommended first-line vasopressor for septic shock, but “there has been a growing interest in the use of vasopressin” ever since researchers described a relative deficiency of vasopressin in the disorder, Dr. Gordon and his associates noted.

“Preclinical and small clinical studies have suggested that vasopressin may be better able to maintain glomerular filtration rate and improve creatinine clearance, compared with norepinephrine,” the investigators said, and other studies have suggested that combining vasopressin with corticosteroids may prevent deterioration in organ function and reduce the duration of shock, thereby improving survival.

To examine those possibilities, they performed the VANISH (Vasopressin vs. Norepinephrine as Initial Therapy in Septic Shock) trial, assessing patients age 16 years and older at 18 general adult ICUs in the United Kingdom during a 2-year period. The study participants were randomly assigned to receive vasopressin plus hydrocortisone (100 patients), vasopressin plus matching placebo (104 patients), norepinephrine plus hydrocortisone (101 patients), or norepinephrine plus matching placebo (103 patients).

The primary outcome measure was the number of days alive and free of kidney failure during the 28 days following randomization. There was no significant difference among the four study groups in the number or the distribution of kidney-failure–free days, the investigators said (JAMA. 2016 Aug 2. doi: 10.1001/jama.2016.10485).
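
The confidence-interval reasoning quoted above can be made concrete with a small bootstrap sketch. The data below are invented, not VANISH results; the point is that an interval whose upper limit reaches a clinically important benefit means the trial cannot rule that benefit out, even when the point estimate shows essentially no difference.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical days alive and free of kidney failure (0-28) per patient;
# arm sizes roughly match the pooled vasopressin and norepinephrine groups.
vaso = rng.integers(0, 29, size=204)
nore = rng.integers(0, 29, size=204)

# Bootstrap the between-group difference in means, then take the
# 2.5th/97.5th percentiles as a 95% confidence interval.
diffs = [
    rng.choice(vaso, size=vaso.size).mean() - rng.choice(nore, size=nore.size).mean()
    for _ in range(10_000)
]
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"difference {vaso.mean() - nore.mean():+.2f} days, 95% CI [{lo:.2f}, {hi:.2f}]")
```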

In addition, the percentage of survivors who never developed kidney failure was not significantly different between the two groups who received vasopressin (57.0%) and the two who received norepinephrine (59.2%). And the median number of days free of kidney failure in the subgroup of patients who died or developed kidney failure was not significantly different between those receiving vasopressin (9 days) and those receiving norepinephrine (13 days).

The quantities of IV fluids administered, the total fluid balance, serum lactate levels, and heart rate were all similar across the four study groups. There also was no significant difference in 28-day mortality between patients who received vasopressin (30.9%) and those who received norepinephrine (27.5%). Adverse event profiles also were comparable.

However, the rate of renal replacement therapy was 25.4% with vasopressin, significantly lower than the 35.3% rate in the norepinephrine group. The use of such therapy was not controlled in the trial and was initiated according to the treating physicians’ preference. “It is therefore not possible to know why renal replacement therapy was or was not started,” Dr. Gordon and his associates noted.

The use of renal replacement therapy wasn’t a primary outcome of the trial. Nevertheless, it is an important patient-centered outcome and may be a factor to consider when treating adults who have septic shock, the researchers added.

The study was supported by the U.K. National Institute for Health Research and the U.K. Intensive Care Foundation. Dr. Gordon reported ties to Ferring, HCA International, Orion, and Tenax Therapeutics; his associates reported having no relevant financial disclosures.

Vitals

Key clinical point: Vasopressin didn’t perform better than norepinephrine in preventing kidney failure when used as a first-line treatment for septic shock.

Major finding: The primary outcome measure – the number of days alive and free of kidney failure during the first month of treatment – was not significantly different among the four study groups.

Data source: A multicenter, double-blind, randomized clinical trial involving 408 ICU patients treated in the United Kingdom during a 2-year period.

Disclosures: The study was supported by the U.K. National Institute for Health Research and the U.K. Intensive Care Foundation. Dr. Gordon reported ties to Ferring, HCA International, Orion, and Tenax Therapeutics; his associates reported having no relevant financial disclosures.

Interhospital patient transfers must be standardized

Imagine the following scenario: a hospitalist on the previous shift accepted a patient from another hospital and received a verbal sign-out at the time of acceptance. Now, 14 hours later, a bed at your hospital is finally available. You were advised that the patient was hemodynamically stable, but that was 8 hours ago. The patient arrives in respiratory distress with a blood pressure of 75/40, and phenylephrine running through a 20g IV in the forearm.

A 400-page printout of the patient’s electronic chart arrives – but no discharge summary is found. You are now responsible for stabilizing the patient and getting to the bottom of why your patient decompensated.

The above vignette is the “worst-case” scenario, yet it highlights how treacherous interhospital transfer can be. A recent study, published in the Journal of Hospital Medicine (doi: 10.1002/jhm.2515), found increased in-hospital mortality (adjusted odds ratio 1.36 [1.29-1.43]) for medical interhospital transfer patients as compared with those admitted from the ED. When care is transferred between hospitals, additional hurdles such as lack of face-to-face sign-out, delays in transport and bed availability, and lack of electronic medical record (EMR) interoperability all contribute to miscommunication and may lead to errors in diagnosis and delay of definitive care.

Diametrically opposed to our many victories in providing technologically advanced medical care, our inability to coordinate even the most basic care across hospitals is an unfortunate reality of our fragmented health care system, and must be promptly addressed.

No widely accepted standard of care currently exists for communication between hospitals regarding transferred patients. Commonalities include a mandatory three-way recorded physician verbal handoff and transmission of an insurance face sheet. However, real-time concurrent EMR connectivity and clinical status updates as frequently as every 2 hours in critically ill patients are uncommon, as our own study found (doi: 10.1002/jhm.2577).
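
As an illustration only (no such standard currently exists, which is the authors’ point), a structured handoff record might look like the following sketch. Every field name here is a hypothetical choice, not part of any existing specification; the design makes the age of clinical data explicit rather than leaving it to be discovered at the bedside.

```python
# Hypothetical sketch of a structured interhospital handoff record; field
# names are illustrative assumptions, not an existing standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TransferHandoff:
    patient_id: str
    sending_hospital: str
    sending_physician_phone: str
    working_diagnosis: str
    code_status: str
    vitals_taken_at: datetime          # staleness is explicit, not assumed
    systolic_bp: int
    heart_rate: int
    vasopressors: list[str] = field(default_factory=list)
    pending_results: list[str] = field(default_factory=list)

    def is_stale(self, max_age_hours: float = 2.0) -> bool:
        """Flag handoffs whose clinical data exceed the update interval."""
        age = datetime.now(timezone.utc) - self.vitals_taken_at
        return age.total_seconds() > max_age_hours * 3600
```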

The lack of a standard of care for interhospital handoffs is, in part, why every transfer is potentially problematic. Many tertiary referral centers receive patients from more than 100 different hospitals and networks, amplifying the need for universal expectations. With differences in expectations among sending and receiving hospitals, there is ample room for variable outcomes, ranging from smooth transfers to the worst-case scenario described above. Enhanced shared decision making between providers at both hospitals, facilitated via communication tools and transfer centers, could lead to more fluid care of the transferred patient.

In order to establish standardized interhospital handoffs, a multicenter study is needed to examine outcomes of various transfer practices. A standard of communication and transfer handoff practices, based on those that lead to better outcomes, could potentially be established. Until this is studied, it is imperative that hospital systems and the government work to adopt broader EMR interoperability and radiology networks; comprehensive health information exchanges can minimize redundancy and provide real-time clinical data to make transfers safer.

Ideally, interhospital transfer should provide no more risk to a patient than a routine shift change of care providers.

Dr. Dana Herrigel is associate program director, internal medicine residency at Robert Wood Johnson Medical School, New Brunswick, N.J. Dr. Madeline Carroll is PGY-3 internal medicine at Robert Wood Johnson Medical School.

Daily fish oil dose boosts healing after heart attack

A daily dose of omega-3 fatty acids from fish oil significantly improved heart function in adults after heart attacks, based on data from a randomized trial of 358 heart attack survivors. The findings were published online Aug. 1 in Circulation.

Patients who received 4 grams of omega-3 fatty acids from fish oil (O-3FA) for 6 months had significant reductions in left ventricular end-systolic volume index (–5.8%) and noninfarct myocardial fibrosis (–5.6%), compared with placebo patients, reported Bobak Heydari, MD, MPH, of Brigham and Women’s Hospital, Boston, and his colleagues.

The effects remained significant after adjusting for factors including guideline-based standard post-heart attack medical therapies, they noted.

Treatment with omega-3 fatty acids (O-3FA) “also was associated with a significant reduction of both biomarkers of inflammation (myeloperoxidase, lipoprotein-associated phospholipase A2) and myocardial fibrosis (ST2),” the researchers wrote. “We therefore speculate that O-3FA treatment provides the aforementioned improvement in LV remodeling and noninfarct myocardial fibrosis through suppression of inflammation at both systemic and myocardial levels during the convalescent healing phase after acute MI,” they noted.

The results build on data from a previous study showing an association between daily doses of O-3FA and improved survival rates in heart attack patients, but the specific impact on heart structure and tissue has not been well studied, the researchers noted (Circulation. 2016;134:378-91; doi: 10.1161/circulationaha.115.019949).

The OMEGA-REMODEL trial (Omega-3 Acid Ethyl Esters on Left Ventricular Remodeling After Acute Myocardial Infarction) was designed to assess the impact of omega-3 fatty acids on heart healing after a heart attack. The average age of the patients was about 60 years. Demographic characteristics and cardiovascular disease histories were not significantly different between the groups.

Compliance for both treatment and placebo groups was 96% based on pill counts. Nausea was the most common side effect, reported by 5.9% of treatment patients and 5.4% of placebo patients. No serious adverse events associated with treatment were reported.

The findings were limited by several factors, including the possible use of over-the-counter fish oil supplementation by patients, the researchers noted. “However, [the] dose-response relationship between O-3FA therapy and our main study endpoints strongly supported our intention-to-treat analysis,” they said.

The study was funded by the National Heart, Lung, and Blood Institute. The researchers had no financial conflicts to disclose.

Vitals

Key clinical point: A daily dose of omega-3 fatty acids for 6 months after a heart attack improved heart function and reduced scarring.

Major finding: Heart attack patients who received 4 grams of omega-3 fatty acids from fish oil daily had significant reductions in both left ventricular end-systolic volume index (-5.8%) and noninfarct myocardial fibrosis (-5.6%), compared with placebo patients after 6 months.

Data source: A randomized trial of 360 heart attack survivors.

Disclosures: The study was funded by the National Heart, Lung, and Blood Institute. The researchers had no financial conflicts to disclose.

Post-AMI death risk model has high predictive accuracy

One score does not fit all
Article Type
Changed
Fri, 01/18/2019 - 16:06
Display Headline
Post-AMI death risk model has high predictive accuracy

An updated risk model based on data from patients presenting after acute myocardial infarction to a broad spectrum of U.S. hospitals appears to predict with a high degree of accuracy which patients are at the greatest risk for in-hospital mortality, investigators say.

Created from data on more than 240,000 patients presenting to one of 655 U.S. hospitals in 2012 and 2013 following ST-segment elevation myocardial infarction (STEMI) or non–ST-segment elevation MI (NSTEMI), the model identified the following independent risk factors for in-hospital mortality: age, heart rate, systolic blood pressure, presentation to the hospital after cardiac arrest, presentation in cardiogenic shock, presentation in heart failure, presentation with STEMI, creatinine clearance, and troponin ratio, reported Robert L. McNamara, MD, of Yale University, New Haven, Conn.


The investigators are participants in the ACTION (Acute Coronary Treatment and Intervention Outcomes Network) Registry–GWTG (Get With the Guidelines).

“The new ACTION Registry–GWTG in-hospital mortality risk model and risk score represent robust, parsimonious, and contemporary risk adjustment methodology for use in routine clinical care and hospital quality assessment. The addition of risk adjustment for patients presenting after cardiac arrest is critically important and enables a fairer assessment across hospitals with varied case mix,” they wrote (J Am Coll Cardiol. 2016 Aug 1;68[6]:626-35).

The revised risk model has the potential to facilitate hospital quality assessments and help investigators identify specific factors that could help clinicians lower death rates even further, the investigators wrote.

Further mortality reductions?

Although improvements in care of patients with acute MI over the last several decades have driven the in-hospital death rate from 29% in 1969 down to less than 7% today, there are still more than 100,000 AMI-related in-hospital deaths in the United States annually, with wide variations across hospitals, Dr. McNamara and colleagues noted.

A previous risk model published by ACTION Registry–GWTG members included data on patients treated at 306 U.S. hospitals and provided a simple, validated in-hospital mortality and risk score.

Since that model was published, however, the dataset was expanded to include patients presenting after cardiac arrest at the time of AMI presentation.

“Being able to adjust for cardiac arrest is critical because it is a well-documented predictor of mortality. Moreover, continued improvement in AMI care mandates periodic updates to the risk models so that hospitals can assess their quality as contemporary care continues to evolve,” the authors wrote.

To see whether they could develop a new and improved model and risk score, they analyzed data on 243,440 patients treated at one of 655 hospitals in the voluntary network. Data on 145,952 patients (60% of the total), 57,039 of whom presented with STEMI and 88,913 of whom presented with NSTEMI, were used for the derivation sample.

Data on the remaining 97,488 (38,060 with STEMI and 59,428 with NSTEMI) were used to create the validation sample.

The authors found that for the total cohort, the in-hospital mortality rate was 4.6%. In multivariate models controlled for demographic and clinical factors, independent risk factors significantly associated with in-hospital mortality (validation cohort) were:

• Presentation after cardiac arrest (odds ratio, 5.15).

• Presentation in cardiogenic shock (OR, 4.22).

• Presentation in heart failure (OR, 1.83).

• STEMI on electrocardiography (OR, 1.81).

• Age, per 5 years (OR, 1.24).

• Systolic BP, per 10 mm Hg decrease (OR, 1.19).

• Creatinine clearance, per 5 mL/min per 1.73 m² decrease (OR, 1.11).

• Heart rate, per 10 beats/min (OR, 1.09).

• Troponin ratio, per 5 units (OR, 1.05).

The 95% confidence intervals for all of the above factors excluded 1.0, indicating that each was statistically significant.
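
Because the model is logistic, the per-unit odds ratios above compound multiplicatively, both across increments of one factor and across factors, assuming the model's additive form on the log-odds scale. A minimal sketch of that arithmetic follows; it uses the published ORs but a purely hypothetical patient profile, and it is not the registry's actual scoring algorithm.

```python
# Illustrative only: the odds ratios come from the published model, but the
# patient profile is hypothetical and this is not the registry's risk score.

def combined_odds_ratio(or_per_unit: float, increments: float) -> float:
    """Odds multiplier for a given number of per-unit increments of a factor."""
    return or_per_unit ** increments

# Hypothetical patient: 20 years older than reference (four 5-year increments)
# with a systolic BP 30 mm Hg lower (three 10-mm Hg decrements).
age_effect = combined_odds_ratio(1.24, 20 / 5)    # about 2.36
bp_effect = combined_odds_ratio(1.19, 30 / 10)    # about 1.69
print(f"combined odds multiplier: {age_effect * bp_effect:.2f}")  # about 3.98
```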

The C-statistic, a standard measure of the predictive accuracy of a logistic regression model, was 0.88, indicating that the final ACTION Registry–GWTG in-hospital mortality model had a high level of discrimination in both the derivation and validation populations, the authors stated.
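
A C-statistic of 0.88 means that, for a randomly chosen pair consisting of one patient who died and one who survived, the model assigns the higher predicted risk to the patient who died 88% of the time. A minimal sketch of the computation on made-up numbers follows; the function and data are illustrative, not from the study.

```python
import numpy as np

def c_statistic(risk: np.ndarray, died: np.ndarray) -> float:
    """Probability that a randomly chosen death has a higher predicted risk
    than a randomly chosen survivor, counting ties as one-half (the ROC AUC)."""
    cases = risk[died == 1]       # predicted risks of patients who died
    controls = risk[died == 0]    # predicted risks of survivors
    higher = (cases[:, None] > controls[None, :]).sum()
    ties = (cases[:, None] == controls[None, :]).sum()
    return (higher + 0.5 * ties) / (cases.size * controls.size)

# Hypothetical predicted risks and outcomes (1 = in-hospital death).
risk = np.array([0.02, 0.40, 0.10, 0.75, 0.05, 0.55])
died = np.array([0, 1, 0, 1, 0, 0])
print(c_statistic(risk, died))  # 0.875
```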

The ACTION Registry–GWTG is a Program of the American College of Cardiology and the American Heart Association, with funding from Schering-Plough and the Bristol-Myers Squibb/Sanofi Pharmaceutical Partnership. Dr. McNamara serves on a clinical trials endpoint adjudication committee for Pfizer. Other coauthors reported multiple financial relationships with pharmaceutical and medical device companies.


Body

Data analyses for the risk models developed by the ACTION Registry generally showed good accuracy and precision. The calibration information showed that patients with a cardiac arrest experienced much greater risk for mortality than did the other major groups (STEMI, NSTEMI, or no cardiac arrest). Until now, clinicians and researchers have generally used either the TIMI [Thrombolysis in Myocardial Infarction] or GRACE [Global Registry of Acute Coronary Events] score to guide therapeutic decisions. With the advent of the ACTION score, which appears to be most helpful for patients with moderate to severe disease, and the HEART [history, ECG, age, risk factor, troponin] score, which targets care for patients with minimal to mild disease, there are other options. Recently, the DAPT (Dual Antiplatelet Therapy) investigators published a prediction algorithm that provides yet another prognostic score to assess risk of ischemic events and risk of bleeding in patients who have undergone percutaneous coronary intervention. The key variables in the DAPT score are age, cigarette smoking, diabetes, MI at presentation, previous percutaneous coronary intervention or previous MI, use of a paclitaxel-eluting stent, stent diameter of less than 3 mm, heart failure or reduced ejection fraction, and use of a vein graft stent.

A comprehensive cross validation and comparison across at least some of the algorithms – TIMI, GRACE, HEART, DAPT, and ACTION – would help at this point. Interventions and decision points have evolved over the past 15 years, and evaluation of relatively contemporary data would be especially helpful. For example, the HEART score is likely to be used in situations in which the negative predictive capabilities are most important. The ACTION score is likely to be most useful in severely ill patients and to provide guidance for newer interventions. If detailed information concerning stents is available, then the DAPT score should prove helpful.

It is likely that one score does not fit all. Each algorithm provides a useful summary of risk to help guide decision making for patients with ischemic symptoms, depending on the severity of the signs and symptoms at presentation and the duration of the follow-up interval. Consensus building would help to move this field forward for hospital-based management of patients evaluated for cardiac ischemia.

Peter W.F. Wilson, MD, of the Atlanta VAMC and Emory Clinical Cardiovascular Research Institute, Atlanta, and Ralph B. D’Agostino Sr., PhD, of the department of mathematics and statistics, Boston University, made these comments in an accompanying editorial (J Am Coll Cardiol. 2016 Aug 1;68[6]:636-8). They reported no relevant disclosures.

Article Source

FROM JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY


Vitals

Key clinical point: An updated cardiac mortality risk model may help to further reduce in-hospital deaths following acute myocardial infarction.

Major finding: The C-statistic for the model, a measure of predictive accuracy, was 0.88.

Data source: Updated risk model and in-hospital mortality score based on data from 243,440 patients following an AMI in 655 U.S. hospitals.

Disclosures: The ACTION Registry-GWTG is a Program of the American College of Cardiology and the American Heart Association, with funding from Schering-Plough and the Bristol-Myers Squibb/Sanofi Pharmaceutical Partnership. Dr. McNamara serves on a clinical trials endpoint adjudication committee for Pfizer. Other coauthors reported multiple financial relationships with pharmaceutical and medical device companies.

Two incretin-based drugs linked to increased bile duct disease but not pancreatitis

Article Type
Changed
Tue, 05/03/2022 - 15:33
Display Headline
Two incretin-based drugs linked to increased bile duct disease but not pancreatitis

At least two incretin-based drugs – glucagon-like peptide 1 agonists and dipeptidyl peptidase 4 inhibitors – do not appear to increase the risk of acute pancreatitis in individuals with diabetes but are associated with an increased risk of bile duct and gallbladder disease.

Two studies examining the impact of incretin-based drugs, including dipeptidyl peptidase 4 (DPP-4) inhibitors and glucagon-like peptide 1 (GLP-1) agonists, on the pancreas and biliary tract were published online Aug. 1 in JAMA Internal Medicine.

Incretin-based drugs have been associated with an increased risk of elevated pancreatic enzyme levels, and GLP-1 has been shown to increase the proliferation and activity of cholangiocytes, findings that have raised concerns about effects on the bile duct, gallbladder, and pancreas.

The first study was an international, population-based cohort study using the health records of more than 1.5 million individuals with type 2 diabetes who began treatment with antidiabetic drugs between January 2007 and June 2013.

Analysis of these data showed there was no difference in the risk of hospitalization for acute pancreatitis between those taking incretin-based drugs and those on two or more other oral antidiabetic medications (JAMA Intern Med. 2016 Aug 1. doi: 10.1001/jamainternmed.2016.1522).

The study also found no significant increase in the risk of acute pancreatitis either with DPP-4 inhibitors or GLP-1 agonists, nor was there any increase with a longer duration of use or in patients with a history of acute or chronic pancreatitis.

Most previous observational studies of incretin-based drugs and pancreatitis had reported null findings, but four studies did find a positive association. Laurent Azoulay, PhD, of the Lady Davis Institute at Montreal’s Jewish General Hospital, and his coauthors suggested this heterogeneity was likely the result of methodologic shortcomings such as the use of inappropriate comparator groups and confounding.

“Although it remains possible that these drugs may be associated with acute pancreatitis, the upper limit of our 95% [confidence interval] suggests that this risk is likely to be small,” the authors wrote. “Thus, the findings of this study should provide some reassurance to patients treated with incretin-based drugs.”

Meanwhile, a second population-based cohort study in 71,368 patients starting an antidiabetic drug found the use of GLP-1 analogues was associated with a significant 79% increase in the risk of bile duct and gallbladder disease, compared with the use of at least two other oral antidiabetic medications.

When stratified by duration of use, individuals taking GLP-1 analogues for less than 180 days showed a twofold increase in the risk of bile duct and gallbladder disease (adjusted hazard ratio, 2.01; 95% CI, 1.23-3.29) but those taking the drugs for longer than 180 days did not show an increased risk.

The use of GLP-1 analogues was also associated with a twofold increase in the risk of undergoing a cholecystectomy.

However, the study found no increased risk of bile duct or gallbladder disease with DPP-4 inhibitors (JAMA Intern Med. 2016 Aug 1. doi: 10.1001/jamainternmed.2016.1531).

Jean-Luc Faillie, MD, PhD, of the University of Montpellier (France) and his associates suggested that rapid weight loss associated with GLP-1 analogues may explain the association with bile duct and gallbladder disease, which would also account for the observation that the association did not occur in patients taking the drugs for a longer period of time.

“Weight loss leads to supersaturation of cholesterol in the bile, a known risk factor for gallstones,” the authors wrote.

DPP-4 inhibitors have different pharmacologic effects on GLP-1 and a weaker incretin action, which the authors suggested may explain both the lack of association with bile duct and gallbladder disease and the lower incidence of gastrointestinal adverse events with these agents.

“Although further studies are needed to confirm our findings and the mechanisms involved, physicians prescribing GLP-1 analogues should be aware of this association and carefully monitor patients for biliary tract complications,” the researchers concluded.

The first study was enabled by data-sharing agreements with the Canadian Network for Observational Drug Effect Studies, which is funded by the Canadian Institutes of Health Research. Two authors declared consulting fees, grant support, or financial compensation from the pharmaceutical industry, but there were no other conflicts of interest declared.

The second study was funded by the Canadian Institutes of Health Research. No conflicts of interest were declared.

Article Source

FROM JAMA INTERNAL MEDICINE

Vitals

Key clinical point: Glucagon-like peptide 1 agonists do not appear to increase the risk of acute pancreatitis in individuals with diabetes but are associated with an increased risk of bile duct and gallbladder disease.

Major finding: GLP-1 agonists are associated with a 79% increase in the risk of bile duct and gallbladder disease, compared with other oral antidiabetic medications, but do not increase the risk of acute pancreatitis.

Data source: Two population-based cohort studies; one involving more than 1.5 million individuals with type 2 diabetes across three countries, and the other involving 71,368 patients with type 2 diabetes.

Disclosures: The first study was enabled by data-sharing agreements with the Canadian Network for Observational Drug Effect Studies, which is funded by the Canadian Institutes of Health Research. Two authors declared consulting fees, grant support, or financial compensation from the pharmaceutical industry, but there were no other conflicts of interest declared. The second study was funded by the Canadian Institutes of Health Research. No conflicts of interest were declared.

Study highlights cardiovascular benefits, lower GI risks of low-dose aspirin

Article Type
Changed
Fri, 01/18/2019 - 16:02
Display Headline
Study highlights cardiovascular benefits, lower GI risks of low-dose aspirin

Resuming low-dose aspirin after an initial lower gastrointestinal bleed significantly increased the chances of recurrence but protected against serious cardiovascular events, based on a single-center retrospective study published in the August issue of Gastroenterology.

In contrast, “we did not find concomitant use of anticoagulants, antiplatelets, and steroids as a predictor of recurrent lower GI bleeding,” said Dr. Francis Chan of the Prince of Wales Hospital in Hong Kong and his associates. “This may be due to the low percentage of concomitant drug use in both groups. Multicenter studies with a large number of patients will be required to identify additional risk factors for recurrent lower GI bleeding with aspirin use.”

Low-dose aspirin has long been known to help prevent coronary artery and cerebrovascular disease, and more recently has been found to potentially reduce the risk of several types of cancer, the researchers noted. Aspirin is well known to increase the risk of upper GI bleeding, but some studies have also linked it to lower GI bleeding. However, “patients with underlying cardiovascular diseases often require lifelong aspirin,” they added. The risks and benefits of stopping or remaining on aspirin after an initial lower GI bleed are unclear (Gastroenterology 2016 Apr 26. doi: 10.1053/j.gastro.2016.04.013).

Accordingly, the researchers retrospectively studied 295 patients who had an initial aspirin-associated lower GI bleed, with aspirin use defined as up to 325 mg a day within a week of bleeding onset. All patients had melena or hematochezia documented by an attending physician and had no endoscopic evidence of upper GI bleeding.

For patients who continued using aspirin at least half the time, the 5-year cumulative incidence of recurrent lower GI bleeding was 19% (95% confidence interval [CI], 13%-25%) – more than double the rate among patients who used aspirin 20% or less of the time (5-year cumulative incidence, 7%; 95% CI, 3%-13%; P = .01). However, the 5-year cumulative incidence of serious cardiovascular events among nonusers was 37% (95% CI, 27%-46%), while the rate among aspirin users was 23% (95% CI, 17%-30%; P = .02). Mortality from noncardiovascular causes was also higher among nonusers (27%) than users (8%; P less than .001), probably because nonusers of aspirin tended to be older than users, but perhaps also because aspirin had a “nonvascular protective effect,” the researchers said.
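
Cumulative incidence figures of this kind are conventionally estimated from censored follow-up data, most simply as 1 minus the Kaplan-Meier survival estimate. The sketch below illustrates that estimator on hypothetical follow-up times; it treats competing events as censoring and is not a reproduction of the study's actual analysis.

```python
import numpy as np

def km_cumulative_incidence(time, event, horizon):
    """1 minus the Kaplan-Meier survival estimate at `horizon`.
    `event` is 1 for an observed bleed, 0 for censored follow-up.
    Simplification: competing risks (e.g., death) are treated as censoring."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    survival = 1.0
    for t in np.unique(time[event == 1]):              # distinct event times
        if t > horizon:
            break
        at_risk = (time >= t).sum()                    # still followed at time t
        n_events = ((time == t) & (event == 1)).sum()  # bleeds at time t
        survival *= 1 - n_events / at_risk
    return 1 - survival

# Hypothetical follow-up in years, censored administratively at 5 years.
time = [0.5, 1.2, 2.0, 3.1, 4.0, 5.0, 5.0, 5.0, 5.0, 5.0]
event = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
print(f"{km_cumulative_incidence(time, event, 5.0):.0%}")  # 21%
```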

A multivariate analysis confirmed these findings, linking lower GI bleeding to aspirin but not to use of steroids, anticoagulants, or antiplatelet drugs, or to age, sex, alcohol consumption, smoking, comorbidities, or cardiovascular risks. Indeed, continued aspirin use nearly tripled the chances of a recurrent lower GI bleed (hazard ratio, 2.76; 95% CI, 1.3-6.0; P = .01), but cut the risk of serious cardiovascular events by about 40% (HR, 0.59; 95% CI, 0.4-0.9; P = .02).

Deciding whether to resume aspirin after a severe lower GI bleed “presents a management dilemma for physicians, patients, and their families, particularly in the absence of risk-mitigating therapies and a lack of data on the risks and benefits of resuming aspirin,” the investigators emphasized. Their findings highlight the importance of weighing the cardiovascular benefits of aspirin against GI toxicity, they said. “Since there is substantial risk of recurrent bleeding, physicians should critically evaluate individual patients’ cardiovascular risk before resuming aspirin therapy. Our findings also suggest a need for a composite endpoint to evaluate clinically significant events throughout the GI tract in patients receiving antiplatelet drugs.”

The Chinese University of Hong Kong funded the study. Dr. Chan reported financial ties to Pfizer, Eisai, Takeda, Otsuka, and AstraZeneca.

Article Source

FROM GASTROENTEROLOGY


Vitals

Key clinical point: Resuming low-dose aspirin after a lower gastrointestinal bleed increased the risk of recurrence but protected against cardiovascular events.

Major finding: At 5 years, the cumulative incidence of recurrent lower GI bleeding was 19% for patients who stayed on aspirin and 7% for patients who largely stopped it (P = .01). The cumulative incidence of serious cardiovascular events was 23% for users and 37% for nonusers (P = .02).

Data source: A single-center 5-year retrospective cohort study of 295 patients with aspirin-associated melena or hematochezia and no upper gastrointestinal bleeding.

Disclosures: The Chinese University of Hong Kong funded the study. Dr. Chan reported financial ties to Pfizer, Eisai, Takeda, Otsuka, and AstraZeneca.

WHO analysis: Cost of new HCV meds unaffordable globally

Treating those in need now leads to savings downstream
Article Type
Changed
Fri, 01/18/2019 - 16:06
Display Headline
WHO analysis: Cost of new HCV meds unaffordable globally

The costs of new medicines for patients infected with hepatitis C virus vary widely around the globe, especially when adjusted for national wealth, results from an economic analysis led by World Health Organization officials suggest.

“These prices threaten the sustainability of health systems in many countries and prevent large-scale provision of treatment,” Suzanne Hill, PhD, of the World Health Organization, Geneva, and her associates wrote (PLoS Med. 2016 May 31;13[5]:e1002032. doi: 10.1371/journal.pmed.1002032).

“Stakeholders should implement a fairer pricing framework to deliver lower prices that take account of affordability. Without lower prices, countries are unlikely to be able to increase investment to minimize the burden of hepatitis C.”

In an effort to calculate the potential total cost of sofosbuvir and ledipasvir/sofosbuvir for different national health systems and individual patients in 30 countries, the researchers obtained 2015 prices for a 12-week course of treatment with the medications for as many countries as possible. Sources of reference were the Pharma Price Information service of the Austrian public health institute Gesundheit Österreich GmbH, national government and drug reimbursement authority websites, and press releases.

Using data compiled between July 17, 2015, and Jan. 25, 2016, medication prices in Organisation for Economic Co-operation and Development (OECD) member countries and certain low- and middle-income countries were converted to U.S. dollars using period average exchange rates and were adjusted for purchasing power parity (PPP). “We analyzed prices compared to national economic performance and estimated market size and the cost of these drugs in terms of countries’ annual total pharmaceutical expenditure (TPE) and in terms of the duration of time an individual would need to work to pay for treatment out of pocket,” the researchers explained. “Patient affordability was calculated using 2014 OECD average annual wages, supplemented [with] International Labour Organization median wages where necessary.”
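
The affordability arithmetic described here reduces to three steps: divide the local price by the market exchange rate for the nominal U.S.-dollar price, divide it by the PPP conversion factor for the PPP-adjusted price, and divide the result by the average annual wage for the working time needed to pay out of pocket. A minimal sketch follows; the exchange rate, PPP factor, and wage are hypothetical placeholders, not the study's inputs.

```python
# Illustrative arithmetic only: the exchange rate, PPP conversion factor, and
# average wage are hypothetical placeholders, not figures from the analysis.

def price_in_usd(local_price: float, fx_rate: float, ppp_factor: float):
    """Return (nominal USD price, PPP-adjusted USD price).
    fx_rate: local currency units per U.S. dollar at market rates.
    ppp_factor: local currency units per international (PPP) dollar."""
    return local_price / fx_rate, local_price / ppp_factor

def years_of_wages(price_usd: float, annual_wage_usd: float) -> float:
    """Working time needed to pay for a course of treatment out of pocket."""
    return price_usd / annual_wage_usd

nominal, ppp = price_in_usd(local_price=160_000, fx_rate=4.0, ppp_factor=1.8)
print(f"nominal ${nominal:,.0f}; PPP-adjusted ${ppp:,.0f}")  # $40,000; $88,889
print(f"{years_of_wages(ppp, annual_wage_usd=45_000):.1f} years of wages")  # 2.0
```

When a currency buys more at home than its market exchange rate implies (PPP factor below the exchange rate), the PPP-adjusted burden exceeds the nominal one, the same pattern the analysis found for central and eastern European countries.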

Dr. Hill and her associates found that HCV medication prices varied significantly across countries, especially when adjusted for national wealth. For example, the median price of a 12-week course of sofosbuvir across 26 OECD countries was $42,017 in U.S. dollars, ranging from $37,729 in Japan to $64,680 in the United States. Countries in central and eastern Europe had higher PPP-adjusted prices, compared with other countries: the PPP-adjusted prices of sofosbuvir in Poland and Turkey were $101,063 and $70,331, respectively, compared with $64,680 in the United States. Similarly, the PPP-adjusted price of ledipasvir/sofosbuvir in Poland was $118,754, compared with $72,765 in the United States.

The researchers also found that the PPP-adjusted price of a full course of sofosbuvir alone would be equivalent to at least 1 year of PPP-adjusted average earnings for individuals in 12 of the 30 countries analyzed. In Poland, Slovakia, Portugal, and Turkey, a course of sofosbuvir alone would cost at least 2 years of average annual wages. “This analysis is conservative because prices were ex-factory prices with an assumed 23% price reduction, and did not include supply chain mark-ups and other costs such as the cost of diagnosis, daclatasvir, ribavirin, and health service costs,” they wrote.

They characterized the costs of sofosbuvir and ledipasvir/sofosbuvir as “not ‘affordable’ for most OECD countries at the nominal and PPP-adjusted prices, with Central and Eastern European countries being the most affected. While determining what is affordable or not is a value judgment, funding these treatments in these national health systems would consume large proportions of their TPE and increase pressure on existing budgets.”

They acknowledged certain limitations of the analysis, including the accuracy of the estimates of the numbers of people infected and of the price information that was accessible. “We have also not included all likely costs, such as the costs of combination treatment with ribavirin, other health care services, and increases in the duration of treatment in patients with cirrhosis; thus, our budget impact estimates are underestimates of the cost of treatment. We are also aware that in some countries, the prices are probably lower than the publicly accessible prices because of confidential discounts or rebates negotiated with the manufacturer.”

Dr. Hill disclosed that she is a member of the PLoS Medicine editorial board.

[email protected]

Treating those in need now leads to savings downstream

The savings to the medical system in averted future costs of liver complications were excluded from the assessment developed by Suzanne Hill, PhD, and her colleagues. However, studies of the cost-effectiveness of HCV therapies in the United States suggest that these benefits are substantial and can help finance HCV treatment.

Despite the discounts offered in both LMICs (low- and middle-income countries) and OECD (Organisation for Economic Cooperation and Development) countries, the short-term impact of HCV treatment on budgets of health care payers and individuals may limit access. However, there are two mitigating factors. First, once the backlog of prevalent cases is treated, the budgetary impact drops dramatically, as only the relatively few incident cases need be treated. Thus, while fiscally disruptive if all HCV-infected persons were immediately put on treatment, that disruption would last only 1 year. Second, treating everyone in year 1 is implausible. The process of identifying cases, limits to health care system capacity, and patient preferences all suggest a multi-year catch-up process. For those reasons, the fiscal burden expressed as a percent of TPE (total pharmaceutical expenditure) or as a portion of the average annual wage would be much less than the maximum burden as presented in Dr. Hill’s article.
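
A rough sketch, again with invented numbers rather than figures from the editorial, shows why clearing the backlog makes the budget burden transient: spending is high only while prevalent cases are being treated, then falls to the cost of treating new infections.

```python
# A hypothetical illustration of phasing in treatment of a prevalent backlog.
# None of these figures come from the editorial or the underlying study.

def annual_treatment_cost(backlog, incidence, price_per_course, phase_in_years):
    """Yield (year, spending) when the backlog is treated evenly over
    `phase_in_years`, alongside each year's incident cases."""
    backlog_share = backlog / phase_in_years
    for year in range(1, phase_in_years + 1):
        yield year, (backlog_share + incidence) * price_per_course
    # Once the backlog is cleared, only incident cases remain.
    yield phase_in_years + 1, incidence * price_per_course

# Hypothetical country: 200,000 prevalent cases, 5,000 new cases per year,
# $40,000 per 12-week course, backlog phased in over 5 years.
for year, cost in annual_treatment_cost(200_000, 5_000, 40_000, 5):
    print(f"Year {year}: ${cost / 1e9:.1f} billion")
# Years 1-5 cost $1.8 billion each; from year 6 onward, only $0.2 billion.
```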

One solution, then, is to spread the upfront cost of treatment over several years. Not everyone is eager to be treated, especially the asymptomatic for whom delay may be less harmful. Beyond this, there are options for phasing in treatment gradually by equity concerns, i.e., treating those with lower access to care first, or by disease stage. Our U.S.-based analysis found that while treating all patients in fibrosis stages 1–4 was cost-effective, initiating treatment in stages 3 and 4 was more cost-effective and would reduce total net treatment costs in the United States by about one-third per individual with chronic hepatitis C. A combination of equity and disease stage criteria can match phase-in plans to different countries’ budgets and political will.

It is in each country’s capacity, and without disruptive budgetary impact, to start treating many of those most in need of care now and to extend coverage to all over the succeeding few years.

These comments were extracted from an accompanying editorial (PLoS Med. 2016 May 31;13(5):e1002031. doi:10.1371/journal.pmed.1002031) by Elliot Marseille, DrPH, and James G. Kahn, MD, MPH. Dr. Marseille is with the Oakland, Calif.-based Health Strategies International. Dr. Kahn is with the Philip R. Lee Institute for Health Policy Studies at the University of California, San Francisco. The authors reported having no relevant financial disclosures.

Display Headline
WHO analysis: Cost of new HCV meds unaffordable globally
Legacy Keywords
hepatitis, HCV
Article Source

FROM PLoS MEDICINE

Vitals

Key clinical point: Current prices of new medicines for hepatitis C virus are variable and unaffordable globally.

Major finding: The median price of a 12-week course of sofosbuvir across 26 OECD countries was $42,017 in U.S. dollars, ranging from $37,729 in Japan to $64,680 in the United States.

Data source: An economic analysis of prices, costs, and affordability of new medicines for HCV in 30 countries.

Disclosures: Dr. Hill disclosed that she is a member of the PLoS Medicine editorial board.

Scant evidence for how to avoid seclusion, restraint in psychiatric patients

Article Type
Changed
Mon, 04/16/2018 - 13:55
Display Headline
Scant evidence for how to avoid seclusion, restraint in psychiatric patients

Very little evidence exists for how to avoid using seclusion and restraints when de-escalating aggression in psychiatric patients in acute care settings, a recent report from the Agency for Healthcare Research and Quality shows.

Historically, aggression in patients has been met with either involuntary placement of the patient in a secured area, or with the involuntary administration of some form of restraint, which might be mechanical or pharmacologic.

Since the late 1990s, however, the Centers for Medicare & Medicaid Services and the Joint Commission have required that seclusion and restraints be used only after less restrictive measures have failed, and those requirements have now been in force for nearly two decades. “Despite practice guidelines advocating limitations of the use of seclusion or restraints as much as possible,” the interventions are used for 10%-30% of patients admitted to acute psychiatric units in the United States and Europe, the authors wrote.

Yet, when the investigators reviewed strategies such as creating a calm environment, medication modifications, staffing changes, training programs, and peer-based interventions, only risk assessment had “any reasonable evidence” of being an effective method for avoiding aggression in psychiatric patients in nonpsychiatric hospital settings, compared with usual care.

“The current evidence base leaves clinicians, administrators, policymakers, and patients without clear guidance,” they wrote, noting that even the strength of the favorable evidence is “at best, low.”

The findings suggest that policymakers are at a disadvantage for measuring performance improvement of these kinds of facilities seeking to reduce their use of seclusion and restraint. The authors asked, “What is the role of quality measures, designed to create incentives to improve the quality of care, when the evidence base for those measures is unclear?”

For the review, patient aggression was defined as making specific imminent verbal threats, or using actual violence toward self, others, or property. The review spanned the literature published between January 1991 and February 2016, and focused on studies with adults having a diagnosed psychiatric disorder, including delirium, who received interventions targeting aggressive behavior in acute care settings. Studies of psychiatric hospitals were excluded, since such facilities often use multimodal strategies that are not suitable for acute care settings that do not care for long-term patients with chronic psychiatric diagnoses.

The studies reviewed had as their primary outcomes decreased aggression (in frequency, severity, or duration), a reduction in the use of seclusion and restraints, or both. Ultimately, out of 1,921 potentially relevant citations, the investigators found a combined total of 11 randomized, controlled trials and cluster randomized trials that qualified for their evidence review, the authors wrote in the report’s executive summary.

Finding strong evidence for any one method of de-escalation was complicated by studies that did not adhere strictly to cluster randomized trial protocols or did not report precise associations between specific, targeted interventions and outcomes in patients actively exhibiting aggression. In addition, the interventions themselves often were described inexactly, or as a matrix of interventions, making them difficult to classify. The reviewers also noted the absence of data on treatment effect modifiers.

Evidence for how to de-escalate active aggression was “even more limited,” according to the authors.

Until more evidence is gathered and reviewed, policymakers could find themselves wondering whether “implementation decisions [should] be delayed until more evidence becomes available,” they wrote.

The AHRQ’s Effective Health Care Program produced the report, titled “Strategies to de-escalate aggressive behavior in psychiatric patients.”

[email protected]

On Twitter @whitneymcknight

