Sepsis mortality linked to concentration of critical care fellowships

– Survival rates for sepsis were highest in the Northeast and in metropolitan areas of the western United States, a pattern that mirrors the concentration of critical care fellowship programs, results from a descriptive analysis found.

“There must be consideration to redistribute the critical care work force based on the spread of the malady that they are trained to deal with,” lead study author Aditya Shah, MD, said in an interview in advance of the annual meeting of the American College of Chest Physicians. “This could be linked to better reimbursements in the underserved areas.”

Dr. Aditya Shah
Dr. Shah, an internal medicine resident at Advocate Christ Medical Center in Oak Lawn, Ill., and his associates extracted sepsis mortality data from the National Center for Health Statistics (NCHS) Compressed Mortality File, which aggregates U.S. death counts by geographic distribution. They defined sepsis death as death attributed to an infection. The researchers used National Residency Matching Program data to determine the locations of current critical care fellowships. Next, they used Google Fusion Tables to map the data and studied them in relation to deaths attributed to infection in the continental United States. The underlying NCHS query selected deaths from infections among people aged 20 years and older, of all races and both sexes, charted state by state.
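For readers curious how such a state-level aggregation might look in practice, here is a minimal sketch in Python; the study itself used the NCHS query tools and Google Fusion Tables (since retired), and the file and column names below are hypothetical stand-ins, not the authors' actual pipeline.

```python
# Minimal sketch of the state-level aggregation described above.
# Hypothetical input: one row per death-record stratum exported from the
# NCHS Compressed Mortality File, with columns state, age, cause_category,
# deaths, and population. Not the authors' actual pipeline.
import pandas as pd

deaths = pd.read_csv("cmf_export.csv")

# Mirror the reported selection: infection-attributed deaths in people
# aged 20 years and older (all races and both sexes assumed pooled).
sepsis = deaths[(deaths["cause_category"] == "infection") & (deaths["age"] >= 20)]

# State-wise charting: crude death rate per 100,000 population.
by_state = sepsis.groupby("state").agg(
    deaths=("deaths", "sum"), population=("population", "sum")
)
by_state["rate_per_100k"] = 1e5 * by_state["deaths"] / by_state["population"]
print(by_state.sort_values("rate_per_100k", ascending=False).head())
```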

Dr. Shah has conducted similar projects in patient populations with HIV and hepatitis, but to his knowledge, this is the first such analysis using NCHS data. “What is unique about this is that we can make real-time presentations to see how the work force and the pathology are evolving from an epidemiological standpoint with real-time data, which can be easily accessed,” he explained. “Depending on what we see, interventions and redistributions could be made to better distribute providers based on where they are needed the most.”

Of 150 critical care fellowship programs identified in the analysis, the majority were concentrated in the Northeast and in metropolitan areas of the western United States, a distribution that parallels patterns noted in other specialties. Survival rates for sepsis were also higher in these locations. Dr. Shah said that the findings support previous studies indicating that physicians tend to practice in geographic areas close to their training sites. However, the fact that such variation existed in mortality from sepsis – one of the most common diagnoses in medical and surgical intensive care units – surprised him. “You would have thought that there would be a work force to deal with this malady,” he said.

He acknowledged certain limitations of the study, including the fact that the NCHS data do not enable researchers to break down mortality from particular causes of sepsis. “Also, the most current data will always lag behind as it is entered retrospectively and needs time to be uploaded online,” he said. “I am still in search of a more real-time database. However, that would require much more intensive time, money, and resources.”

Dr. Shah reported having no financial disclosures.

AT CHEST 2016

Vitals

Key clinical point: Sepsis survival rates appear to be highest in the Northeast and in metropolitan areas of the Western United States.

Major finding: Survival rates for sepsis were higher in the Northeast and in metropolitan areas of the western United States, compared with other areas of the country.

Data source: A descriptive analysis that evaluated sepsis mortality data linked to 150 critical care fellowship programs in the United States.

Disclosures: Dr. Shah reported having no financial disclosures.

Smokers’ hand grip strength predicts risk for respiratory events

– Hand grip strength is independently predictive of the risk for respiratory events in smokers who have, or are at risk for, chronic obstructive pulmonary disease (COPD), results from a single-center study showed.

“Measures of lung function, including spirometry, are used as the main descriptors of COPD severity and prognosis,” Carlos H. Martinez, MD, MPH, said in an interview in advance of the annual meeting of the American College of Chest Physicians. “These measurements, as important as they are, need to be improved, in order to develop better risk and prognostic models of the disease, to identify subgroups at higher risk of poor outcomes ... With our work, we have proved that simple physical tests could be part of future prognostic models.”

Dr. Carlos H. Martinez

Interest has grown in developing multidimensional models to predict respiratory prognosis. Such models include BODE (body mass index, airflow obstruction, dyspnea, and exercise capacity), ADO (age, dyspnea, and airflow obstruction), and DOSE (dyspnea, airflow obstruction, smoking status, and exacerbation frequency).

In patients with or at risk for COPD, Dr. Martinez, of the University of Michigan Health System, Ann Arbor, and his colleagues tested the associations of hand grip strength with measures of body composition such as pectoralis muscle area and extent of subcutaneous fat, imaging phenotypes, and lung function.

The researchers obtained demographic, clinical, lung function, hand grip strength, and imaging data from 441 smokers with and without COPD participating in the Genetic Epidemiology of COPD Study (COPDGene) at National Jewish Health in Denver. Imaging methods used in the study were developed by George R. Washko, MD, and his associates at Brigham and Women’s Hospital, Boston, to evaluate patients’ body composition, including chest CTs to obtain measures of airway thickness, emphysema percentage, pectoralis muscle area, and subcutaneous adipose tissue area.

Correlations between measures of lung function, imaging phenotypes, body composition, and hand grip strength were analyzed in univariate analysis and in multivariate linear models. The association between hand grip strength and exacerbations was analyzed at enrollment and during an average follow-up of 2.6 years.

Hand grip strength was similar across groups categorized by spirometry severity and was not associated with emphysema severity.

After adjustment for demographics, smoking history, smoking intensity, comorbidities, and lung imaging phenotypes, however, grip strength was associated with pectoralis muscle area (an increase of 3.9 kg per one standard deviation of pectoralis muscle area) and subcutaneous adipose tissue (a decrement of 5.1 kg per one standard deviation of subcutaneous adipose tissue). These associations were independent of body mass index and the presence of emphysema.
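As a rough illustration of how a "kg per standard deviation" estimate arises, the sketch below z-scores two hypothetical imaging measures and fits an adjusted linear model; the column names and covariate set are assumptions for illustration, not the study's actual specification.

```python
# Illustrative adjusted linear model: grip strength (kg) regressed on
# z-scored pectoralis muscle area (pma) and subcutaneous adipose tissue
# (sat). Column names and the adjustment set are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

d = pd.read_csv("copdgene_subset.csv")
for col in ("pma", "sat"):
    d[f"{col}_z"] = (d[col] - d[col].mean()) / d[col].std()  # per-1-SD scaling

fit = smf.ols("grip_kg ~ pma_z + sat_z + age + pack_years + bmi", data=d).fit()
# In the study, the analogous estimates were about +3.9 kg (pectoralis
# muscle area) and -5.1 kg (subcutaneous adipose tissue) per 1 SD.
print(fit.params[["pma_z", "sat_z"]])
```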

Hand grip strength was associated with exacerbations at enrollment (risk ratio, 0.94 per 1-kg increment in grip strength) and with incident exacerbations during follow-up (incident risk ratio, 0.92 per 1-kg increment) in models adjusted for other factors known to be associated with exacerbations.
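These risk ratios are expressed per 1-kg increment; if the model's multiplicative scale is taken at face value, the implied effect of a larger grip-strength difference compounds, as in this back-of-envelope calculation (an interpretation aid, not an analysis from the study).

```python
# Back-of-envelope compounding of the per-kg risk ratios reported above,
# assuming a multiplicative model: a 10-kg grip-strength difference
# corresponds to rr ** 10.
for label, rr_per_kg in [("exacerbations at enrollment", 0.94),
                         ("incident exacerbations", 0.92)]:
    print(f"{label}: RR {rr_per_kg}/kg -> {rr_per_kg ** 10:.2f} per 10 kg")
# exacerbations at enrollment: RR 0.94/kg -> 0.54 per 10 kg
# incident exacerbations: RR 0.92/kg -> 0.43 per 10 kg
```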

Research in body composition has mostly relied on dual-energy x-ray absorptiometry and bioelectrical impedance, tools not routinely used in clinical practice, Dr. Martinez said. “We were surprised by the ability to show similar results using imaging data that are available from regular chest CTs.”

“We have confirmed prior hypotheses that it is not just weight or BMI that matters (to risk of exacerbations), but how much muscle and how much fat are contributing to our patient’s high or low BMI,” Dr. Martinez said.

Hand grip testing can be challenging in this patient population, he said. Still, “asking relevant questions about (patients’) physical fitness will help us to understand better our patients’ needs. We can also give more attention to the extrapulmonary structures included in the numerous chest CT scans that we order for our patients. These imaging studies, besides the information that they provide about parenchymal and mediastinal structures, include important and easy to discover clues to identify patients at higher risk of exacerbations – those with low muscle and low hand grip could benefit from close follow-up.”

Dr. Martinez acknowledged certain limitations of the study, including the selection of the measures of body composition. “We used analysis of chest CTs, instead of the gold standard of dual absorptiometry (DXA) or other methods such as bioelectrical impedance,” he said. “A final limitation is that we tested a selected group of participants in a cohort study, not a representative sample of the population, [with a] low burden of emphysema and fewer African American participants.”

Dr. Martinez disclosed that his work is supported by the National Institutes of Health and that COPDGene also receives NIH funding. He acknowledged the support and effort of all COPDGene investigators and participants.

AT CHEST 2016

Vitals

Key clinical point: Hand grip strength may be a useful marker of respiratory events in individuals at risk of COPD.

Major finding: Hand grip strength was associated with exacerbations at enrollment (risk ratio, 0.94 per 1-kg increment in grip strength) and with incident exacerbations during an average follow-up of 2.6 years (incident risk ratio, 0.92 per 1-kg increment).

Data source: Data from 441 smokers with and without COPD participating in the Genetic Epidemiology of COPD Study (COPDGene) at National Jewish Health in Denver.

Disclosures: Dr. Martinez disclosed that his work is supported by the National Institutes of Health and that COPDGene also receives NIH funding.

LMWH best for preventing PE in patients with major trauma

WAIKOLOA, HAWAII – Venous thromboembolism prophylaxis with low molecular weight heparin (LMWH), instead of unfractionated heparin (UH), is associated with a lower risk of pulmonary embolism (PE) in patients with major trauma, results from a large study have shown.

The results of the study, based on data from the American College of Surgeons (ACS) Trauma Quality Improvement Program, suggest that LMWH-based strategies for thromboprophylaxis should be preferred after major trauma.

Dr. James Byrne
“Patients with major injury are at high risk for developing venous thromboembolism,” James Byrne, MD, said at the annual meeting of the American Association for the Surgery of Trauma. “Deep vein thrombosis frequently complicates the clinical course, and pulmonary embolism remains a leading cause of delayed mortality. We know that pharmacologic prophylaxis reduces the risk of DVT. For this reason, timely initiation of either low molecular weight or unfractionated heparin is indicated for all patients.”

Dr. Byrne, a general surgery resident at Sunnybrook Health Sciences Centre, Toronto, went on to note that LMWH is often favored because of a randomized controlled trial that showed LMWH was associated with fewer deep vein thromboses (N Engl J Med. 1996;335[10]:701-7). However, significant practice variability continues to exist.

“Practitioners might favor the shorter half-life of unfractionated heparin in patients where they perceive the risk for hemorrhagic complications is high,” he said. “There’s also recent evidence to suggest that dosing may be all-important and that unfractionated heparin dosed three times daily may be equivalent to low molecular weight heparin. If this is true, it might suggest that the historically higher cost of low molecular weight heparin could favor the use of unfractionated heparin.”

Furthermore, there is a lack of evidence comparing either agent to prevent PE, he added. “This is an important gap in our knowledge, because PE frequently occurs in the absence of an identified DVT and carries a significant risk of death. At present, it is not known how practice patterns with respect to choice of prophylaxis type influence risk of PE at the patient or hospital levels.”

Given that gap, the researchers set out to compare the effectiveness of LMWH versus UH for preventing PE in patients with major trauma who were treated at trauma centers participating in the ACS Trauma Quality Improvement Program from 2012 to 2015. They included all adults with severe injury who received LMWH or UH, and excluded those who died or were discharged within five days and those with a bleeding disorder or on chronic anticoagulation. The exposure was defined as thromboprophylaxis with LMWH versus UH, and the primary outcome was PE confirmed on radiologic imaging. Potential confounders were considered, including patient baseline characteristics, anatomic and global injury severity, presenting characteristics in the emergency department, acute intracranial injuries, orthopedic injuries, early surgical interventions, and timing of prophylaxis initiation.

Dr. Byrne and his associates then used three analytic approaches in the study: a propensity score matching methodology, a multivariable logistic regression model for PE, and a center-level analysis examining the influence of LMWH utilization on hospital rates of PE.
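For readers unfamiliar with the first of those approaches, here is a minimal 1:1 nearest-neighbor propensity score matching sketch in Python; the covariates, column names, and matching details are hypothetical placeholders, since the study's exact specification is not given here.

```python
# Minimal 1:1 nearest-neighbor propensity-score matching sketch.
# Covariates and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("tqip_cohort.csv")  # hypothetical extract
covariates = ["age", "iss", "severe_head_injury", "ortho_injury", "shock_ed"]

# 1) Model the probability of receiving LMWH (the exposure).
ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["lmwh"])
df["pscore"] = ps.predict_proba(df[covariates])[:, 1]

# 2) Match each LMWH patient to the nearest UH patient on the score.
treated = df[df["lmwh"] == 1]
control = df[df["lmwh"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = control.iloc[idx.ravel()]

# 3) Compare PE rates between the matched groups.
print("PE rate, LMWH:", treated["pe"].mean())
print("PE rate, matched UH:", matched["pe"].mean())
```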

They identified 153,474 trauma patients from 217 trauma centers. Their median age was 50 years, and 67% were male. Blunt trauma was most common (89%), with a mean Injury Severity Score of 20. LMWH was the most common type of thromboprophylaxis used (74%), and PE was diagnosed in 2,722 patients (1.8%).

Compared with patients who received LMWH, those who received UH were older and were significantly more likely to have been injured by falling (42% vs. 28%), with higher rates of severe head injuries (43% vs. 24%) and intracranial hemorrhage (38% vs. 19%). Conversely, LMWH was most favored in patients with orthopedic injuries.

After propensity score matching, patients on LMWH suffered significantly fewer PEs (1.4% vs. 2.4%; odds ratio, 0.56). This result was consistent within propensity-matched subgroups, including for patients with blunt multisystem injuries (OR, 0.60), penetrating truncal injuries (OR, 0.65), shock in the ED (OR, 0.68), isolated severe traumatic brain injury (OR, 0.49), and isolated orthopedic injuries (OR, 0.28).

Results of a sensitivity analysis in which each propensity-matched pair was matched within the same trauma center yielded similar results. Specifically, patients who received LMWH were at significantly lower risk for developing PE (OR, 0.64). “Importantly, this analysis minimized residual confounding due to differences in hospital-level processes of care, such as prophylaxis dosing or frequency, mechanical prophylaxis use, and thromboembolism screening practices,” Dr. Byrne noted.

Multivariable logistic regression also showed that patients who received LMWH had lower odds of PE (OR, 0.59). Other significant predictors of PE included obesity (OR, 1.54), severe chest injury (OR, 1.31), femoral shaft fracture (OR, 1.60), and spinal cord injury (OR, 1.60). Delays in prophylaxis initiation beyond the first day in the hospital were associated with significantly higher rates of PE, with an 80% increased risk of PE for patients who had their prophylaxis initiated after the fourth day.
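Odds ratios like these are the exponentiated coefficients of the logistic model; the quick conversion below back-solves the coefficients from the reported ORs purely for illustration.

```python
# Reported ORs are exp(coefficient) from a logistic model; back-solving
# the coefficients here is purely illustrative, not model output.
import math

for name, or_reported in [("LMWH", 0.59), ("obesity", 1.54),
                          ("femoral shaft fracture", 1.60),
                          ("spinal cord injury", 1.60)]:
    beta = math.log(or_reported)
    print(f"{name}: beta = {beta:+.3f} -> OR = {math.exp(beta):.2f}")
```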

The researchers conducted a center-level analysis to determine whether practice patterns with respect to choice of prophylaxis type influence hospital rates of PE. Across all 217 trauma centers in the study, the median rate of LMWH use was 80%, while the mean rate of PE was 1.6%. When trauma centers were grouped into quartiles based on their rates of LMWH use, trauma centers in the highest quartile (median LMWH use, 95%) were 50 times more likely to use LMWH than those in the lowest quartile (median LMWH use, 39%) after adjusting for patient case mix. Compared with the lowest quartile, trauma centers that used the greatest proportion of LMWH had significantly lower rates of PE (1.2% vs. 2.0%). After adjusting for patient baseline and injury characteristics, patients who were treated at trauma centers in the highest quartile had significantly lower odds of PE (OR, 0.59).
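That quartile grouping is straightforward to reproduce in pandas; a sketch under the same hypothetical column names as the matching example above:

```python
# Center-level sketch: quartiles of LMWH use vs. hospital PE rates.
# Same hypothetical columns as the matching sketch above.
import pandas as pd

df = pd.read_csv("tqip_cohort.csv")
centers = df.groupby("center_id").agg(lmwh_use=("lmwh", "mean"),
                                      pe_rate=("pe", "mean"))
centers["lmwh_quartile"] = pd.qcut(centers["lmwh_use"], 4,
                                   labels=["Q1", "Q2", "Q3", "Q4"])
print(centers.groupby("lmwh_quartile", observed=True)["pe_rate"].mean())
```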

Dr. Byrne acknowledged certain limitations of the study, including the potential for residual confounding and the inability to account for the dosing and frequency of prophylaxis that was given. “We were only able to measure the type and timing of prophylaxis initiation. We don’t know what doses of prophylaxis were used, and it is possible that the trauma centers included in this study favored use of UH twice daily,” he said.

Therefore, it is possible that the results might have been different if the researchers had been able to directly compare LMWH with UH administered three times a day. “We also couldn’t measure interruptions in dosing due to surgery or patient refusal,” he said. “However, if it is the case that UH is more likely to be refused based on the need for more frequent dosing, perhaps that is another feather in the cap of low molecular weight heparin-based thromboprophylaxis strategies. Larger prospective studies are needed that take into account prophylaxis type and dosing and are powered to detect a difference with respect to PE.”

Dr. Byrne reported having no financial disclosures.

 

 


AT THE AAST ANNUAL MEETING

Vitals

Key clinical point: LMWH-based strategies for thromboprophylaxis should be preferred after major trauma.

Major finding: After propensity score matching, patients on LMWH had significantly fewer PEs, compared with those on unfractionated heparin (1.4% vs. 2.4%; odds ratio, 0.56).

Data source: A multicenter analysis of 153,474 trauma patients, 2,722 of whom were diagnosed with pulmonary embolism.

Disclosures: Dr. Byrne reported having no financial disclosures.

Pelvic fracture pattern predicts the need for hemorrhage control

WAIKOLOA, HAWAII – Blunt trauma patients admitted in shock with anteroposterior compression III or vertical shear fracture patterns, as well as patients with open pelvic fractures, are at greatest risk of severe bleeding requiring a pelvic hemorrhage control intervention, results from a multicenter trial demonstrated.

Thirty years ago, researchers defined a classification of pelvic fracture based on the pattern of force applied to the pelvis, Todd W. Costantini, MD, said at the annual meeting of the American Association for the Surgery of Trauma. They identified three main force patterns: lateral compression, anteroposterior compression, and vertical shear (Radiology. 1986 Aug;160[2]:445-51).

Dr. Todd W. Costantini
“They were able to show that certain pelvic fractures were associated with soft tissue injury and pelvic hemorrhage,” said Dr. Costantini, of the division of trauma, surgical critical care, burns and acute care surgery at the University of California, San Diego. “Since then, several single center studies have been conducted in an attempt to correlate fracture pattern with the risk of pelvic hemorrhage. A majority of these studies evaluated angiogram as the endpoint for hemorrhage control. Modern trauma care has evolved to include multiple modalities to control hemorrhage, which include pelvic external fixator placement, pelvic angiography and embolization, preperitoneal pelvic packing, and the use of the REBOA [Resuscitative Endovascular Balloon Occlusion of the Aorta] catheter as an adjunct to hemorrhage control.”

In a recently published study, Dr. Costantini and his associates found wide variability in the use of various pelvic hemorrhage control methods (J Trauma Acute Care Surg. 2016 May;80[5]:717-25). “While angioembolization alone and external fixator placement alone were the most common methods used, there were various combinations of these methods used at different times by different institutions,” he said.

These results prompted the researchers to prospectively evaluate the correlation between pelvic fracture pattern and modern pelvic hemorrhage control at 11 Level I trauma centers over a two-year period. Inclusion criteria for the study, which was sponsored by the AAST Multi-institutional Trials Committee, were age over 18 years, blunt mechanism of injury, and shock on admission, defined as an admission systolic blood pressure of less than 90 mm Hg, a heart rate greater than 120 beats per minute, or a base deficit greater than 5.
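The admission-shock screen is a simple disjunction of three thresholds; translated literally below, with units assumed where the article does not state them. Exclusion criteria included isolated hip fracture, pregnancy, and lack of pelvic imaging.

```python
# Literal translation of the study's admission-shock inclusion screen.
# Units assumed: systolic BP in mm Hg, heart rate in beats per minute;
# the article does not state the base-deficit unit.
def in_shock_on_admission(sbp: float, hr: float, base_deficit: float) -> bool:
    return sbp < 90 or hr > 120 or base_deficit > 5

print(in_shock_on_admission(sbp=88, hr=110, base_deficit=3))   # True (SBP criterion)
print(in_shock_on_admission(sbp=100, hr=110, base_deficit=3))  # False
```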

The researchers evaluated the pelvic fracture pattern for each patient in the study. “Each pelvic image was evaluated by a trauma surgeon, orthopedic surgeon, or radiologist and classified using the Young-Burgess classification system,” Dr. Costantini said. Next, they used univariate and multivariate logistic regression analysis to identify predictors of hemorrhage control intervention and mortality. The objective was to determine whether pelvic fracture pattern would predict the need for a hemorrhage control intervention.

Of the 46,716 trauma patients admitted over the two-year period, 1,339 sustained a pelvic fracture. Of these, 178 met criteria for shock. The researchers excluded 15 patients due to lack of pelvic imaging, which left 163 patients in the final analysis. Their mean age was 44 years, and 58% were male. On admission, their mean systolic blood pressure was 93 mm Hg, their mean heart rate was 117 beats per minute, and their median Injury Severity Score was 28. The mean hospital length of stay was 12 days, and the mortality rate was 30%. The three most common mechanisms of injury were motor vehicle crash (42%), pedestrian struck by auto (23%), and falls (18%).

Compared with patients who did not require a hemorrhage control intervention, those who did received more packed red blood cells (13 vs. 7 units, respectively; P less than .01) and fresh frozen plasma (10 vs. 5 units; P = .01). In addition, 67% of patients with open pelvic fractures required a hemorrhage control intervention. Mortality was similar between patients who required a pelvic hemorrhage control intervention and those who did not (34% vs. 28%; P = .47).

The three most common pelvic fracture patterns were lateral compression I (36%) and II (23%), followed by vertical shear (13%). Patients with lateral compression I and II fractures were least likely to require a hemorrhage control intervention (22% and 19%, respectively). On univariate analysis, however, patients with anteroposterior compression III fractures and those with vertical shear fractures were more likely to require a pelvic hemorrhage control intervention than those who sustained other types of pelvic fractures (83% and 55%, respectively).

On multivariate analysis, the three main independent predictors of the need for a hemorrhage control intervention were anteroposterior compression III fracture (odds ratio, 109.43; P less than .001), open pelvic fracture (OR, 7.36; P = .014), and vertical shear fracture (OR, 6.99; P = .002). Pelvic fracture pattern did not predict mortality on multivariate analysis.

The invited discussant, Joseph M. Galante, MD, trauma medical director for the University of California, Davis Health System, characterized the study as important, “because it examines all forms of hemorrhage control, not just arterioembolism in the treatment of pelvic fractures,” he said. “The ability to predict who will need hemorrhage control allows for earlier mobilization to resources, both in the operating room or interventional suite and in the resuscitation bay.”

Dr. Costantini reported having no financial disclosures.

 

 

AT THE AAST ANNUAL MEETING

Vitals

 

Key clinical point: Patients with anterior posterior compression III pelvic fractures face an especially high risk of severe bleeding that requires a hemorrhage control intervention.

Major finding: On multivariate analysis, the three main independent predictors of need for a hemorrhage control intervention were anterior posterior compression III fracture (odds ratio, 109.43; P less than .001), open pelvic fracture (OR, 7.36; P = .014), and vertical shear fracture (OR, 6.99; P = .002).

Data source: A prospective evaluation of 163 patients with pelvic fracture who were admitted to 11 Level I trauma centers over a two-year period.

Disclosures: Dr. Costantini reported having no financial disclosures.

C. difficile risk linked to antibiotic use in prior hospital bed occupant

Article Type
Changed
Sat, 12/08/2018 - 03:03

 

Inpatients are at increased risk for Clostridium difficile infection if the previous occupant of their hospital bed received antibiotics, according to a report published online October 10 in JAMA Internal Medicine.

The increase in risk was characterized as “modest,” but it is important because the use of antibiotics in hospitals is so common. “Our results show that antibiotics can potentially cause harm to patients who do not themselves receive the antibiotics and thus emphasize the value of antibiotic stewardship,” said Daniel E. Freedberg, MD, a gastroenterologist at Columbia University, New York, and his associates (JAMA Intern Med. 2016 Oct 10. doi: 10.1001/jamainternmed.2016.6193).

They performed a large retrospective cohort study of sequentially hospitalized adults at four New York City area hospitals between 2010 and 2015. They focused on 100,615 pairs of patients in which the first patient was hospitalized for at least 24 hours and was discharged less than 1 week before the second patient was hospitalized in the same bed for at least 48 hours. A total of 576 “second patients” developed C. difficile infection 2 to 14 days after hospitalization.
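
To make the pairing rule concrete, here is a minimal sketch of the bed-sharing eligibility check as described above; the Stay type, field names, and example dates are illustrative assumptions, not the investigators' code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Stay:
    bed_id: str
    admit: datetime
    discharge: datetime

def eligible_pair(first: Stay, second: Stay) -> bool:
    """Pairing rule as reported: same bed; the first patient occupied it
    for at least 24 hours; the second patient arrived within 1 week of
    the first patient's discharge and stayed at least 48 hours."""
    return (
        first.bed_id == second.bed_id
        and first.discharge - first.admit >= timedelta(hours=24)
        and timedelta(0) <= second.admit - first.discharge < timedelta(weeks=1)
        and second.discharge - second.admit >= timedelta(hours=48)
    )

a = Stay("12-B", datetime(2014, 3, 1, 8), datetime(2014, 3, 4, 10))
b = Stay("12-B", datetime(2014, 3, 4, 20), datetime(2014, 3, 9, 9))
print(eligible_pair(a, b))  # True
```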

There were no C. difficile outbreaks during the study period, and the incidence of C. difficile infections remained constant. The “first patient” occupied the bed for a median of 3.0 days, and the median interval before the “second patient” arrived at the bed was 10 hours. Among those who developed a C. difficile infection, the median time from admission into the bed to the development of the infection was 6.4 days.

The cumulative incidence of C. difficile infections was significantly higher among second patients when the prior bed occupants had received antibiotics (0.72%) than when the prior bed occupants had not received antibiotics (0.43%). This correlation remained strong and significant when the data were adjusted to account for potential confounders such as the second patient’s comorbidities and use of antibiotics, the number of nearby patients who already had a C. difficile infection, and the type of hospital ward involved.

The strong association also persisted through numerous sensitivity analyses, including one that excluded the 1,497 patient pairs in which the first patient had had a recent C. difficile infection (adjusted hazard ratio, 1.20). In a further analysis examining multiple risk factors for infection, receipt of antibiotics by the “first patient” was the only factor associated with subsequent patients’ infection risk. The investigators noted that the four hospitals involved in this study were among the many that routinely single out the rooms of patients with C. difficile infection for intensive cleaning, including UV radiation.

These findings “support the hypothesis that antibiotics given to one patient may alter the local microenvironment to influence a different patient’s risk” for C. difficile infection, the investigators concluded.

The study was supported in part by the American Gastroenterological Association and the National Center for Advancing Translational Sciences. Dr. Freedberg and his associates reported having no relevant financial disclosures.


FROM JAMA INTERNAL MEDICINE


TBI scoring system predicts outcomes with only initial head CT findings

Article Type
Changed
Mon, 01/07/2019 - 12:46

 

– A simple 8-point scoring system based on head CT accurately predicts mortality, morbidity, and even discharge disposition among patients with a traumatic brain injury (TBI).

In its first clinical study, the predictive power of the Cranial CT Scoring Tool (CCTST) rivaled that of both the Glasgow Coma Score (GCS) and the Abbreviated Injury Scale (AIS), Ronnie Mubang, MD, said at the American College of Surgeons’ Clinical Congress.

In addition to adding valuable prognostic information, the CCTST is quick, easy, and completely objective, said Dr. Mubang, of St. Luke’s University Health Network, Bethlehem, Pa.

“The near-universal head CT makes this tool valuable in immediate prognostication and clinical risk assessment for physicians, patients and families. It can serve as a potential adjunct to the Glasgow score and Abbreviated Injury Score for risk assessment,” he said. Of note, the final AIS-Head may not be available until relatively late in the patient’s clinical course, and the GCS has important limitations in terms of outcome prognostication.

The CCTST is an 8-point assessment, with one point assigned to each of eight cranial CT findings: epidural hematoma, subdural hematoma, subarachnoid hemorrhage, intraventricular hemorrhage, cerebral contusion/intraparenchymal hemorrhage, skull fracture, brain edema/herniation, and midline shift. A ninth factor, external injury to the head, is also assessed.
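
Because the tally is a simple sum of binary findings, it is easy to express in code. The sketch below assumes one boolean per CT finding; the identifiers are illustrative, not the tool's official nomenclature.

```python
# One point per cranial CT finding that is present.
CT_FINDINGS = (
    "epidural_hematoma", "subdural_hematoma", "subarachnoid_hemorrhage",
    "intraventricular_hemorrhage", "contusion_or_iph", "skull_fracture",
    "edema_or_herniation", "midline_shift",
)

def cctst_score(findings: dict) -> int:
    """Sum one point for each of the eight CT findings marked present."""
    return sum(1 for f in CT_FINDINGS if findings.get(f, False))

example = {"subdural_hematoma": True, "midline_shift": True,
           "edema_or_herniation": True}
print(cctst_score(example))  # 3
```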

Dr. Mubang, a fourth-year surgical resident, and his colleagues retrospectively examined the CCTST in 620 patients included in an administrative database at the three-hospital St. Luke’s Regional Trauma Network. Patients were older than 45 years. Half of them underwent neurosurgical intervention within 24 hours of admission and were matched with 310 patients who did not require neurosurgery. The primary clinical endpoint was mortality from head injury. Secondary endpoints included morbidity, hospital and intensive care unit length of stay, and post-discharge destination.

The mean age of the cohort was 73 years. Almost all injuries (99%) were due to blunt force trauma. The mean GCS was 11, the mean Injury Severity Score (ISS) was 24, and the mean AIS-Head score was 4.6, indicating severe to critical TBI. Midline shift was significantly greater in the surgical group (0.74 cm vs. 0.29 cm).

Several CT findings were significantly more common in the surgical group, including subdural hematoma (96% vs. 7%); midline shift (74% vs. 29%); brain edema (39% vs. 23%); and epidural hematoma (10% vs. 3%).

As the total CCTST score increased, outcomes worsened accordingly, Dr. Mubang said. Patients with a score of 1-2 had a 20%-30% chance of complications and an approximately 10% chance of injury-related mortality. Patients with higher scores (7-8) had a 60%-75% chance of morbidity and a 55% chance of mortality.

Rising scores correlated well with ICU length of stay: a score of 1-2 was associated with an average stay of 3 days, while a score of 8 was associated with stays exceeding 10 days. The same pattern held for overall hospital length of stay, with the lowest scores associated with a stay of about a week and the highest scores with stays exceeding 2 weeks.

CCTST was highly associated with discharge disposition. With every additional point, the chance of discharge to home fell. While the majority of patients with scores below 2 were discharged home, no patients with a score of 8 were discharged home.

Finally, the investigators performed a multivariate analysis that controlled for sex; GCS, ISS, and AIS-Head scores; time in the trauma bay; and preinjury anticoagulation treatment. The CCTST score was strongly associated with patient mortality (OR, 1.31), rivaling both GCS (OR, 1.14) and AIS-Head (OR, 2.68). Neither ISS nor preinjury anticoagulation predicted mortality. CCTST was also the only variable independently associated with the need for neurosurgical intervention.

The team is planning a multicenter retrospective validation, followed by a prospective observational study in the next 2 years, according to Dr. Stan Stawicki, the senior investigator, also with St. Luke’s. “CCTST offers potential promise to add much needed granularity to our existing TBI clinical assessment paradigm that continues to rely heavily on AIS-Head and GCS,” he said.

Neither Dr. Mubang nor Dr. Stawicki had any financial disclosures.


AT ACS 2016

Vitals

 

Key clinical point: The Cranial CT Scoring Tool (CCTST) uses eight head CT findings to predict mortality, morbidity, and patient discharge disposition.

Major finding: CCTST score was strongly associated with patient mortality (odds ratio, 1.31), rivaling both the Glasgow Coma Score (OR, 1.14) and the Abbreviated Injury Score-Head (OR, 2.68).

Data source: The retrospective database study comprised 620 head trauma patients.

Disclosures: Neither Ronnie Mubang, MD, nor Stan Stawicki, MD, had any financial disclosures.

Study links low diastolic blood pressure to myocardial damage, coronary heart disease

Lower is not always better
Article Type
Changed
Fri, 01/18/2019 - 16:17

 

Low diastolic blood pressure (DBP) was significantly associated with myocardial injury and incident coronary heart disease, especially when the systolic blood pressure was 120 mm Hg or higher, investigators reported.

Compared with a DBP of 80 to 89 mm Hg, DBP below 60 mm Hg more than doubled the odds of high-sensitivity cardiac troponin-T levels equaling or exceeding 14 ng/L and increased the risk of incident coronary heart disease (CHD) by about 50% in a large observational study. Associations were strongest when baseline systolic blood pressure was at least 120 mm Hg, signifying elevated pulse pressure, reported Dr. John McEvoy of the Ciccarone Center for the Prevention of Heart Disease, Johns Hopkins University, Baltimore, and associates (J Am Coll Cardiol. 2016;68[16]:1713-22).

“Our results have a number of potential implications, particularly in the post-SPRINT era where the threshold for diagnosing and treating hypertension could be redefined,” the investigators emphasized, referring to the Systolic Blood Pressure Intervention Trial (SPRINT), which found a reduced rate of major cardiovascular events and all-cause mortality associated with a targeted systolic blood pressure below 120 mm Hg, vs. less than 140 mm Hg in a high risk population (N Engl J Med 2015; 373:2103-2116). “Despite the undeniable clinical benefits reported in SPRINT, one of many concerns related to aggressive SBP reduction with pharmacotherapy is the possibility of myocardial ischemia by lowering DBP,” they noted.

Their study included 11,565 individuals tracked for 21 years through the Atherosclerosis Risk in Communities cohort, an observational population-based study of adults from North Carolina, Mississippi, Minnesota, and Maryland. The researchers excluded participants with known baseline cardiovascular disease or heart failure. High-sensitivity cardiac troponin-T levels were measured at three time points: 1990-1992, 1996-1998, and 2011-2013. Participants averaged 57 years of age at enrollment, 57% were female, and 25% were black (J Am Coll Cardiol. 2016 Oct 18. doi: 10.1016/j.jacc.2016.07.754).

Compared with baseline DBP of 80 to 89 mm Hg, DBP under 60 mm Hg was associated with a 2.2-fold greater odds (P = .01) of high-sensitivity cardiac troponin-T levels equal to or exceeding 14 ng/L during the same visit – indicating prevalent myocardial damage – even after controlling for race, sex, body mass index, smoking and alcohol use, triglyceride and cholesterol levels, diabetes, glomerular filtration rate, and use of antihypertensives and lipid-lowering drugs, the researchers said. The odds of myocardial damage remained increased even when DBP was 60 to 69 mm Hg (odds ratio, 1.5; P = .05). Low DBP also was associated with myocardial damage at any given systolic blood pressure.

Furthermore, low DBP significantly increased the risk of progressively worsening myocardial damage, as indicated by a rising annual change in high-sensitivity cardiac troponin-T levels over 6 years. The association was significant as long as DBP was under 80 mm Hg, but was strongest when DBP was less than 60 mm Hg. Diastolic blood pressure under 60 mm Hg also significantly increased the chances of incident CHD and death, but not stroke.

Low DBP was most strongly linked to subclinical myocardial damage and incident CHD when systolic blood pressure was at least 120 mm Hg, indicating elevated pulse pressure, the researchers reported. Systolic pressure is “the main determinant of cardiac afterload and, thus, a primary driver of myocardial energy requirements,” while low DBP reduces myocardial energy supply, they noted. Therefore, high pulse pressure would lead to the greatest mismatch between myocardial energy demand and supply.
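
As a concrete illustration of the pattern the investigators describe, the sketch below computes pulse pressure and flags the combination reported as highest risk; the thresholds come from the article, while the function names are illustrative.

```python
def pulse_pressure(sbp_mm_hg: float, dbp_mm_hg: float) -> float:
    """Pulse pressure is simply systolic minus diastolic pressure (mm Hg)."""
    return sbp_mm_hg - dbp_mm_hg

def highest_risk_pattern(sbp_mm_hg: float, dbp_mm_hg: float) -> bool:
    """Flag the combination the study linked most strongly to myocardial
    damage and incident CHD: DBP < 60 mm Hg with SBP >= 120 mm Hg, i.e.,
    low diastolic pressure with an elevated pulse pressure."""
    return dbp_mm_hg < 60 and sbp_mm_hg >= 120

print(pulse_pressure(135, 55))        # 80
print(highest_risk_pattern(135, 55))  # True
```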

“Among patients being treated to SBP goals of 140 mm Hg or lower, attention may need to be paid not only to SBP but also, importantly, to achieved DBP,” the investigators wrote. “Diastolic and systolic BP are inextricably linked, and our results highlighted the importance of not ignoring the former and focusing only on the latter, instead emphasizing the need to consider both in the optimal treatment of adults with hypertension.”

The study was supported by the National Institutes of Health/National Institute of Diabetes and Digestive and Kidney Diseases and by the National Heart, Lung, and Blood Institute. Roche Diagnostics provided reagents for the cardiac troponin assays. Dr. McEvoy had no disclosures. One author disclosed ties to Roche; one author disclosed ties to Roche, Abbott Diagnostics, and several other relevant companies; and two authors are coinvestigators on a provisional patent filed by Roche for use of biomarkers in predicting heart failure. The other four authors had no disclosures.

Lower is not always better

 

The average age in the study by McEvoy et al. was 57 years. One might anticipate that in an older population, the side effects from lower BPs [blood pressures] due to drug therapy such as hypotension or syncope would be greater, and the potential for adverse cardiovascular events due to a J-curve would be substantially increased compared with what was seen in the present study. Similarly, an exacerbated potential for lower DBP to be harmful might be expected in patients with established coronary artery disease.

The well-done study ... shows that lower may not always be better with respect to blood pressure control and, along with other accumulating evidence, strongly suggests careful thought before pushing blood pressure control below current guideline targets, especially if the diastolic blood pressure falls below 60 mm Hg while the pulse pressure is [greater than] 60 mm Hg.

Deepak L. Bhatt, MD, MPH, is at Brigham and Women’s Hospital Heart & Vascular Center, Boston. He disclosed ties to Amarin, Amgen, AstraZeneca, Bristol-Myers Squibb, Eisai, and a number of other pharmaceutical and medical education companies. His comments are from an accompanying editorial (J Am Coll Cardiol. 2016 Oct 18;68[16]:1723-1726).


From the Journal of the American College of Cardiology

Vitals

 

Key clinical point: Low diastolic blood pressure is associated with myocardial injury and incident coronary heart disease.

Major finding: Diastolic blood pressure below 60 mm Hg more than doubled the odds of high-sensitivity cardiac troponin-T levels equaling or exceeding 14 ng/L and increased the risk of incident coronary heart disease by about 50%, compared with diastolic blood pressure of 80 to 89 mm Hg. Associations were strongest when pulse pressure was elevated (greater than 60 mm Hg).

Data source: A prospective observational study of 11,565 adults followed for 21 years as part of the Atherosclerosis Risk in Communities cohort.

Disclosures: The study was supported by the National Institutes of Health/National Institute of Diabetes and Digestive and Kidney Diseases and by the National Heart, Lung, and Blood Institute. Roche Diagnostics provided reagents for the cardiac troponin assays. Dr. McEvoy had no disclosures. One author disclosed ties to Roche; one author disclosed ties to Roche, Abbott Diagnostics, and several other relevant companies; and two authors are coinvestigators on a provisional patent filed by Roche for use of biomarkers in predicting heart failure. The other four authors had no disclosures.

CDC study finds worrisome trends in hospital antibiotic use

Incorporate behavioral strategies to cut antibiotic overuse
Article Type
Changed
Fri, 01/18/2019 - 16:17

 

U.S. hospitals have not cut overall antibiotic use and have significantly increased the use of several broad-spectrum agents, according to a first-of-its-kind analysis of national hospital administrative data.

“We identified significant changes in specific antibiotic classes and regional variation that may have important implications for reducing antibiotic-resistant infections,” James Baggs, PhD, and colleagues from the Centers for Disease Control and Prevention, Atlanta, reported in the study, published online on September 19 in JAMA Internal Medicine.

They found that from 2006 through 2012, hospitals significantly decreased their use of fluoroquinolones and first- and second-generation cephalosporins, but these trends were offset by significant rises in the use of vancomycin and broad-spectrum agents used to treat gram-negative infections, including carbapenems, third- and fourth-generation cephalosporins, and β-lactam/β-lactamase inhibitor combinations. Accordingly, they encouraged hospitals to enroll in the Antibiotic Use Option of the National Healthcare Safety Network, adding that surveillance of this type is crucial to prevent or delay the emergence of resistant bacterial pathogens (JAMA Intern Med. 2016 Sept 19. doi: 10.1001/jamainternmed.2016.5651).

The retrospective study included approximately 300 acute care hospitals in the Truven Health MarketScan Hospital Drug Database, which covered 34 million pediatric and adult patient discharges equating to 166 million patient-days. In all, 55% of patients received at least one antibiotic dose while in the hospital, and hospitals administered 755 days of antibiotic therapy for every 1,000 patient-days, the investigators said. Overall antibiotic use rose during the study period by only 5.6 days of therapy per 1,000 patient-days on average, a change that was not statistically significant.
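
Days of therapy (DOT) per 1,000 patient-days, the metric used throughout the study, normalizes antibiotic exposure to hospital census. As an illustration (the hospital figures below are invented to match the reported rate, not taken from the data set):

\[
\text{DOT per 1{,}000 patient-days} = \frac{\text{total antibiotic days of therapy}}{\text{total patient-days}} \times 1{,}000
\]

A hospital with 15,100 antibiotic days of therapy over 20,000 patient-days would report \( (15{,}100 / 20{,}000) \times 1{,}000 = 755 \), matching the overall rate above. In days-of-therapy accounting, a patient receiving two antibiotics typically accrues 2 DOT per calendar day, so DOT can exceed patient-days.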

However, the use of third- and fourth-generation cephalosporins rose by a mean of 10.3 days of therapy per 1,000 patient-days (95% confidence interval, 3.1-17.5), and hospitals also used significantly more macrolides (mean rise, 4.8 days of therapy per 1,000 patient-days; 95% CI, 2.0-7.6), glycopeptides (22.4; 17.5-27.3), β-lactam/β-lactamase inhibitor combinations (18.0; 13.3-22.6), carbapenems (7.4; 4.6-10.2), and tetracyclines (3.3; 2.0-4.7).

Inpatient antibiotic use also varied significantly by region, the investigators said. Hospitals in rural areas used about 16 more days of antibiotic therapy per 1,000 patient-days compared with those in urban areas. Hospitals in Mid-Atlantic states (New Jersey, New York, Pennsylvania) and Pacific Coast states (Alaska, California, Hawaii, Oregon, and Washington) used the least antibiotics (649 and 665 days per 1,000 patient-days, respectively), while Southwest Central states (Arkansas, Louisiana, Oklahoma, and Texas) used the most (823 days).

The CDC provided funding for the study. The researchers had no disclosures.

Incorporate behavioral strategies to cut antibiotic overuse

 

The dramatic variation in antibiotic prescribing across individual clinicians, across regions of the United States, and internationally indicates great potential for improvement. ... In the article by Baggs et al, inpatient antibiotic prescribing in some regions of the United States is roughly 20% lower than in other regions. On a per capita basis, Swedes consume less than half as many antibiotics as Americans.

Growing patterns of antibiotic resistance have driven calls for more physician education and new diagnostics. While these efforts may help, it is important to recognize that many emotionally salient factors are driving physicians to inappropriately prescribe antibiotics. Future interventions need to counterbalance these factors using tools from behavioral science to reduce the use of inappropriate antibiotics.

Ateev Mehrotra, MD, MPH, and Jeffrey A. Linder, MD, MPH, are at Harvard University, Boston. They had no disclosures. These comments are from an editorial that accompanied the study (JAMA Intern Med. 2016 Sept 19. doi: 10.1001/jamainternmed.2016.6254).


FROM JAMA INTERNAL MEDICINE

Vitals

 

Key clinical point: Inpatient antibiotic use did not decrease between 2006 and 2012, and the use of several broad-spectrum agents rose significantly.

Major finding: Hospitals significantly decreased their use of fluoroquinolones and first- and second-generation cephalosporins, but these trends were offset by significant rises in the use of vancomycin, carbapenems, third- and fourth-generation cephalosporins, and β-lactam/β-lactamase inhibitor combinations.

Data source: A retrospective study of administrative hospital discharge data for about 300 U.S. hospitals from 2006 through 2012.

Disclosures: The Centers for Disease Control and Prevention provided funding. The researchers had no disclosures.

Lyme disease spirochete helps babesiosis gain a foothold


 

– The spirochete that causes Lyme disease in humans may be lending a helping hand to the weaker protozoan that causes babesiosis, escalating the rate of human babesiosis cases in regions where both are endemic.

Peter Krause, MD, a research scientist in epidemiology, medicine, and pediatrics at Yale University School of Public Health, New Haven, Conn., reviewed what’s known about babesiosis–Lyme disease coinfections at the annual meeting of the American Society for Microbiology.

Speaking during a session focused on tick-borne illnesses, Dr. Krause explained that coinfection involves the entire tick-reservoir host cycle, noting that “a total of seven different human pathogens are transmitted by Ixodes scapularis ticks.”

Understanding the entire cycle is necessary, he said, because the effects of coinfection will be different depending on the stage of the cycle, and upon the coinfection pathogens.

Over the course of many years, Dr. Krause and his collaborators have used an interdisciplinary, multi-modal approach to try to understand the interplay between these pathogens, their hosts, and environmental, demographic, and ecologic factors.

One arm of their research has taken them to the lab, where they have modeled coinfection and transmission of Borrelia burgdorferi and Babesia microti from their reservoir host, the white-footed mouse (Peromyscus leucopus), to the vector, the deer tick (Ixodes scapularis), which can transmit both diseases to humans.

In an experimental design that mimicked the natural reservoir-vector ecology, Dr. Krause and his collaborators first infected mice by letting 5 to 10 nymphal ticks feed on each animal, approximating the average number of ticks that feed on an individual mouse in the wild. The researchers then tracked the effect of coinfection on transmission of each pathogen to ticks during the larval feeds, finding that B. burgdorferi increased B. microti parasitemia in coinfected mice. Coinfection also increased B. microti transmission from mice to ticks, at least partly because of the increased parasitemia, Dr. Krause said.

The downstream effect on humans is to increase the risk of babesiosis for those who live in regions where both B. microti and B. burgdorferi are endemic, Dr. Krause said.

B. microti is less “ecologically fit” than B. burgdorferi, Dr. Krause said, noting that there are more ticks and humans infected with the latter, as well as more reservoir mice carrying B. burgdorferi. Also, the rate of geographic expansion is more rapid for B. burgdorferi. “B. microti is only endemic in areas where B. burgdorferi is already endemic; it may not be ‘fit’ enough to establish endemic sites on its own,” Dr. Krause said.

The increased rate of B. microti transmission via ticks from mice, if the mice are coinfected with B. burgdorferi, may help explain the greater-than-expected rate of babesiosis in humans in areas of New England where coinfection is common. “This paradox might be explained by the enhancement of B. microti survival and spread by the coinfecting presence of Borrelia burgdorferi,” Dr. Krause said.

This naturalistic experiment has ecological implications in terms of the human impact as well: “Coinfection may help enhance geographic spread of B. microti to new areas,” Dr. Krause said.

Clinicians in geographic areas where both pathogens are endemic should maintain a high level of suspicion for coinfection, especially for the most ill patients. “Anaplasmosis and/or babesiosis coinfection increases the severity of Lyme disease,” Dr. Krause said. “Health care workers should consider anaplasmosis and/or babesiosis coinfection in Lyme disease patients who have more severe illness or who do not respond to antibiotic therapy.”

Understanding the complex interspecies interplay will be increasingly important as more cases of tick-borne illness are seen, Dr. Krause concluded. “Research on coinfections acquired from Ixodes scapularis has just begun.”

Dr. Krause reported no relevant conflicts of interest.


EXPERT ANALYSIS FROM ASM 2016


HIV hospitalizations continue to decline


 

The total number of HIV hospitalizations fell by a third during 2000-2013, even though the number of people living with HIV increased by more than 50%, according to an investigation by the Agency for Healthcare Research and Quality.

“To some extent, the considerable reduction in hospital utilization by persons with HIV disease may be attributed to the diffusion of new antiretroviral medications and the enhanced ability of clinicians to control viral replication,” wrote investigator Fred Hellinger, PhD, of AHRQ’s Center for Delivery, Organization, and Markets (Med Care. 2016 Jun;54[6]:639-44).

Dr. Hellinger said the drop in hospitalization noted with the introduction of highly effective antiretroviral therapy in the 1990s continues. As people live longer with HIV, “the proportion of HIV-infected patients covered by Medicare and their average age are likely to continue to increase,” he said.

Dr. Hellinger used his agency’s State Inpatient Database to collect data on all HIV-related hospital admissions from California, Florida, New Jersey, New York, and South Carolina during 2000-2013. Overall, people with HIV were 64% less likely to be hospitalized in 2013 than they were in 2000; there was also a slight drop in length of stay.
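
A rough consistency check on these two headline numbers (editorial arithmetic, not a calculation from the paper): if total hospitalizations fell to about two-thirds of their 2000 level while the population living with HIV grew by half, the crude per-person hospitalization rate would be

\[
\frac{2/3}{1.5} \approx 0.44,
\]

that is, roughly 56% lower, in the same direction as, and of similar magnitude to, the adjusted 64% reduction in hospitalization likelihood the study reports.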

Meanwhile, the average age of hospitalized HIV patients has risen from 41 to 49 years, and the average number of diagnoses from 6 to more than 12. That’s in part because HIV patients are living longer, and “older patients are generally sicker and have more chronic illnesses ... As HIV patients age, they are being hospitalized for conditions that are not closely related to HIV infection,” Dr. Hellinger said.

“Indeed, the principal diagnosis for almost two-thirds of the HIV patients hospitalized in 2013 in our sample was not HIV infection, and as time passes, the mix of diagnoses recorded for hospitalized patients with HIV is likely to resemble the mix of diagnoses found in the general population of hospitalized patients,” he wrote.

U.S. HIV spending continues to rise. The number of inpatients covered by Medicaid has fallen, while the number covered by Medicare has risen 50%, reflecting patients' longer average life span.

There has not been much demographic change among HIV inpatients. About half are black, a quarter white, and slightly less than one-fifth Hispanic. One-third are women. More than 1.1 million Americans are living with HIV, and 50,000 are newly infected each year.

Dr. Hellinger had no conflicts of interest.



FROM MEDICAL CARE

Vitals

 

Key clinical point: The total number of HIV hospitalizations fell by a third during 2000-2013, even though the number of people living with HIV increased by more than 50%.

Major finding: People with HIV were 64% less likely to be hospitalized in 2013 than they were in 2000.

Data source: Review of HIV-related hospitalizations in five U.S. states.

Disclosures: The Agency for Healthcare Research and Quality funded the work. The investigator had no disclosures.