Anticoagulation for patients with liver cirrhosis and portal vein thrombosis

Clinical question: Should patients with liver cirrhosis and portal vein thrombosis be treated with anticoagulation?

Background: Portal vein thrombosis occurs in about 20% of patients with liver cirrhosis. Previously, these patients were often not treated with anticoagulation because of concern about the increased bleeding risk associated with advanced liver disease. However, restoring portal vein patency may prevent further sequelae, including intestinal infarction and portal hypertension, and may also affect candidacy for liver transplantation.

Study design: Meta-analysis.

Setting: Multiple sites throughout the world.

Synopsis: The authors of this meta-analysis pooled data from eight clinical trials, comprising 353 patients with liver cirrhosis and portal vein thrombosis, to assess rates of complete and partial recanalization with anticoagulation therapy (warfarin or low-molecular-weight heparin) versus no therapy. The authors also compared rates of minor and major bleeding complications in patients who received anticoagulation with those in patients who received no therapy. Patients who received anticoagulation had increased recanalization and reduced progression of thrombosis without an excess of major or minor bleeding.
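
For readers unfamiliar with how such pooled estimates are produced, the sketch below combines recanalization risk ratios from a few hypothetical trials using inverse-variance (fixed-effect) weighting. The trial counts and the fixed-effect choice are illustrative assumptions; they are not data or methods taken from the Loffredo meta-analysis.

```python
import math

# Hypothetical per-trial counts: (recanalized_tx, n_tx, recanalized_ctrl, n_ctrl).
# These numbers are invented for illustration; they are not from the cited study.
trials = [
    (18, 30, 10, 28),
    (25, 45, 14, 44),
    (12, 20, 7, 22),
]

log_rrs, weights = [], []
for a, n1, c, n2 in trials:
    rr = (a / n1) / (c / n2)                 # per-trial risk ratio for recanalization
    var = 1/a - 1/n1 + 1/c - 1/n2            # variance of log(RR)
    log_rrs.append(math.log(rr))
    weights.append(1 / var)                  # inverse-variance weight

pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
pooled_se = 1 / math.sqrt(sum(weights))

print(f"Pooled RR {math.exp(pooled_log):.2f} "
      f"(95% CI {math.exp(pooled_log - 1.96 * pooled_se):.2f}"
      f"-{math.exp(pooled_log + 1.96 * pooled_se):.2f})")
```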

Bottom line: This meta-analysis suggests anticoagulation might be safe and effective for treating portal vein thrombosis in patients with cirrhosis; however, the analysis was based on nonrandomized clinical trials and did not address important long-term endpoints, such as the effect of anticoagulation on mortality.

Citation: Loffredo L, Pastori D, Farcomeni A, Violi F. Effects of anticoagulants in patients with cirrhosis and portal vein thrombosis: a systematic review and meta-analysis. Gastroenterology. 2017 May 4. Epub ahead of print.

Dr. Teixeira is a hospitalist at Ochsner Health System, New Orleans.

Bowel rest or early feeding for acute pancreatitis

Clinical question: When should you start enteral feedings in patients with acute pancreatitis?

Background: Oral intake stimulates pancreatic exocrine activity, so bowel rest has been one of the mainstays of treatment for acute pancreatitis. However, some studies suggest that enteral nutrition may reduce the risk of infection by supporting the gut’s protective barrier, thereby limiting bacterial translocation and sepsis. Studies comparing early with delayed enteral nutrition in acute pancreatitis have so far yielded conflicting results.

Study design: Systematic review.

Setting: Europe, New Zealand, United States, and China.

Synopsis: The study authors compared length of hospital stay, mortality, and readmission in hospitalized patients with acute pancreatitis who received early versus delayed feeding. They searched for randomized clinical trials comparing early feeding (less than 48 hours after hospitalization) with delayed feeding (more than 48 hours after hospitalization).

The authors identified and analyzed 11 randomized trials, comprising 948 patients, that compared early and delayed feeding strategies. Their review suggests that early feeding in patients with acute pancreatitis is not associated with increased adverse events and may reduce length of hospital stay. The analysis was limited by markedly different feeding protocols, which precluded a formal meta-analysis; by the inclusion of studies at high or unclear risk of bias; and by the small size of most trials, which limited the power to detect differences in outcomes.

Bottom line: The optimal route and timing of nutrition in patients with acute pancreatitis remain unsettled.

Citation: Vaughn VM, Shuster D, Rogers MAM, et al. Early versus delayed feeding in patients with acute pancreatitis: a systematic review. Ann Intern Med. 2017;166(12):883-92.
 

Dr. Teixeira is a hospitalist at Ochsner Health System, New Orleans.

Epidemiology, Consequences of Non-Leg VTE in Critically Ill Patients

Clinical question: Which risk factors are key in the development of non-leg deep vein thromboses (NLDVTs), and what are the expected clinical sequelae from these events?

Background: Critically ill patients are at increased risk of venous thrombosis. Despite adherence to recommended daily thromboprophylaxis, many patients will develop a venous thrombosis in a vein other than the lower extremity. The association between NLDVT and pulmonary embolism (PE) or death is less clearly identified.

Study design: A prospective cohort study nested within the PROphylaxis for ThromboEmbolism in Critical Care Trial (PROTECT), a multicenter, blinded, concealed-allocation randomized trial conducted between May 2006 and June 2010.

Setting: Sixty-seven international secondary and tertiary care ICUs in both academic and community settings.

Synopsis: Researchers enrolled 3,746 ICU patients in a randomized controlled trial of dalteparin versus standard heparin for thromboprophylaxis. Of these patients, 84 (2.2%) developed an NLDVT. These thromboses were more likely to be deep and located proximally.

Risk factors were assessed using five selected variables: APACHE (Acute Physiology and Chronic Health Evaluation) score, BMI, malignancy, and treatment with vasopressors or statins. Apart from indwelling upper-extremity central venous catheters, cancer was the only independent predictor of NLDVT.

Compared with patients without any VTE, those with NLDVT were more likely to develop PE (14.9% versus 1.9%) and had longer ICU stays (19 versus 9 days). On average, one in seven patients with NLDVT developed PE during the hospital stay. Despite the association with PE, NLDVT was not associated with increased ICU mortality in an adjusted model; however, the PROTECT trial may have been underpowered to detect a difference. Additional limitations of the study included a relatively small total number of NLDVTs and a lack of standardized screening protocols for both NLDVT and PE.

Bottom line: Despite universal heparin thromboprophylaxis, many medical-surgical critically ill patients may develop NLDVT, placing them at higher risk for longer ICU stays and PE.

Citation: Lamontagne F, McIntyre L, Dodek P, et al. Nonleg venous thrombosis in critically ill adults: a nested prospective cohort study. JAMA Intern Med. 2014;174(5):689-696.

Model for End-Stage Liver Disease (MELD) May Help Determine Mortality Risk

Clinical question: Can a Model for End-Stage Liver Disease (MELD)-based model be updated and used to predict inpatient mortality in hospitalized patients with cirrhosis and acute variceal bleeding (AVB)?

Background: AVB in cirrhosis continues to carry mortality rates as high as 20%. Risk prediction for individual patients is important to determine when a step-up in acuity of care is needed and to identify patients who would most benefit from preemptive treatments such as a transjugular intrahepatic portosystemic shunt. Many predictive models are available but are currently difficult to apply in the clinical setting.

Study design: Initial comparison data were collected prospectively from clinical records; the updated MELD model was then confirmed in cohort validation studies.

Setting: Prospective data were collected at the Hospital Clinic in Barcelona, Spain; validation cohorts for calibration of the new MELD model were drawn from hospital settings in Canada and Spain.

Synopsis: Data were collected from 178 patients with cirrhosis and esophageal AVB who received standard therapy from 2007 to 2010. Esophageal bleeding was confirmed endoscopically. The primary endpoint was six-week, bleeding-related mortality; among all subjects studied, the six-week mortality rate was 16%. The models evaluated for validity included the Child-Pugh score, the D’Amico and Augustin models, and the MELD score.

Each model was assessed via discrimination, calibration, and overall performance in mortality prediction. The MELD was identified as the best model in terms of discrimination and overall performance but was miscalibrated. The original validation cohort from the Hospital Clinic in Spain was utilized to update the MELD calibration via logistic regression. External validation was completed via cohort studies in Canada (N=240) and at Vall D’Hebron Hospital in Spain (N=221).

With the updated calibration, the MELD score adds a predictive component in the setting of AVB that was not previously available: MELD values of 19 or higher predict mortality greater than 20%, whereas values lower than 11 predict mortality of 5%.
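
Conceptually, the recalibration step amounts to refitting a logistic model that maps the MELD score to the probability of six-week mortality. The sketch below illustrates that idea on simulated data; the cohort, coefficients, and predicted risks it produces are hypothetical and do not reproduce the published model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated "derivation cohort": MELD scores and six-week bleeding-related deaths.
# Purely synthetic; the assumed relation below is not the published model.
meld = rng.integers(6, 40, size=500).reshape(-1, 1)
true_logit = -5.0 + 0.18 * meld.ravel()           # assumed data-generating relation
death = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# "Recalibration": refit a logistic regression of observed mortality on MELD.
model = LogisticRegression(max_iter=1000).fit(meld, death)

# Predicted six-week mortality at selected MELD values (illustrative only).
for score in (11, 19, 25):
    p = model.predict_proba([[score]])[0, 1]
    print(f"MELD {score}: predicted six-week mortality {p:.1%}")
```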

Bottom line: The updated MELD model may provide a more accurate way to identify patients in whom more aggressive preemptive therapies are indicated, based on prognostic predictions of mortality.

Citation: Reverter E, Tandon P, Augustin S, et al. A MELD-based model to determine risk of mortality among patients with acute variceal bleeding. Gastroenterology. 2014;146(2):412-419.

Emergency Department Visits, Hospitalizations Due to Insulin

Clinical question: What is the national burden of ED visits and hospitalizations for insulin-related hypoglycemia?

Background: As the prevalence of diabetes mellitus continues to rise, the use of insulin and the burden of insulin-related hypoglycemia on the healthcare system will increase. By identifying high-risk populations and analyzing the circumstances of insulin-related hypoglycemia, we might be able to identify and employ strategies to decrease the risks associated with insulin use.

Study design: Observational study using a national adverse drug event surveillance database and a national household survey.

Setting: U.S. hospitals, excluding psychiatric and penal institutions.

Synopsis: Using data from the National Electronic Injury Surveillance System–Cooperative Adverse Drug Event Surveillance (NEISS-CADES) Project and the National Health Interview Survey (NHIS), the authors estimated the rates and characteristics of ED visits and hospitalizations for insulin-related hypoglycemia. They estimated that about 100,000 such ED visits occur nationally each year and that almost one-third of those visits result in hospitalization. Compared with younger patients treated with insulin, patients 80 years or older were more likely to present to the ED (rate ratio, 2.5; 95% CI, 1.5-4.3) and much more likely to be subsequently hospitalized (rate ratio, 4.9; 95% CI, 2.6-9.1) for insulin-related hypoglycemia.
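
As a point of reference for how a rate ratio such as the 2.5 (95% CI, 1.5-4.3) above is derived, the short sketch below computes one from invented event counts and person-time, with the confidence interval taken on the log scale; the inputs are hypothetical and do not come from the NEISS-CADES or NHIS data.

```python
import math

def rate_ratio(events_a, time_a, events_b, time_b):
    """Rate ratio with a 95% CI computed on the log scale."""
    rr = (events_a / time_a) / (events_b / time_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Invented counts: ED visits for hypoglycemia per insulin-treated patient-years,
# older vs. younger patients. Not taken from NEISS-CADES or NHIS.
rr, lo, hi = rate_ratio(events_a=50, time_a=10_000, events_b=40, time_b=20_000)
print(f"Rate ratio {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```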

The most common causes of insulin-related hypoglycemia were failure to reduce insulin during periods of reduced food intake and confusion between short-acting and long-acting insulins. The authors suggest that less stringent glycemic control be considered for elderly patients to decrease the risk of insulin-related hypoglycemia and its sequelae. Patient education addressing common insulin errors might also decrease the burden of ED visits and hospitalizations related to insulin.

Bottom line: The risk of hypoglycemia in patients older than 80 years should be considered before starting an insulin regimen or increasing the insulin dose.

Citation: Geller AI, Shehab N, Lovegrove MC, et al. National estimates of insulin-related hypoglycemia and errors leading to emergency department visits and hospitalizations. JAMA Intern Med. 2014;174(5):678-686.

Healthcare Worker Attire Recommendations

Clinical question: What are the perceptions of patients and healthcare personnel (HCP) regarding attire, and what evidence exists for contamination and transmission of pathogenic microorganisms by HCP attire?

Background: HCP attire is an important aspect of the healthcare profession. There is increasing concern about transmission of microorganisms in the hospital by fomites, including HCP apparel, and studies have demonstrated contamination of HCP apparel; however, evidence that HCP apparel plays a role in transmitting microorganisms to patients is lacking.

Study design: Literature and policy review, survey of Society for Healthcare Epidemiology of America (SHEA) members.

Setting: Literature search from January 2013 to March 2013 for articles related to bacterial contamination and laundering of HCP attire and patient and provider perceptions of HCP attire and/or footwear. Review of policies related to HCP attire from seven large teaching hospitals.

Synopsis: The search identified 26 articles that studied patients’ perceptions of HCP attire and only four studies that reviewed HCP preferences relating to attire. There were 11 small prospective studies related to pathogen contamination of HCP apparel but no clinical studies demonstrating transmission of pathogens from HCP attire to patients. There was one report of a pathogen outbreak potentially related to HCP apparel.

Hospital policies primarily addressed general appearance and dress for all employees, without specific requirements for HCP outside of sterile or procedure-based areas. One institution recommended bare-below-the-elbows (BBE) attire for physicians during patient care activities.

The survey drew 337 responses (21.7% response rate) and showed poor enforcement of HCP attire policies; nevertheless, a majority of respondents felt that the role of HCP attire in the transmission of pathogens in the healthcare setting was very or somewhat important.

Patients preferred formal attire, including a white coat, but this preference had limited impact on patient satisfaction or confidence in practitioners. Patients did not perceive HCP attire as an infection risk but were willing to change their preference for formal attire when informed of this potential risk.

BBE policies are in effect at some U.S. hospitals and in the United Kingdom, but the effect on healthcare-associated infection rates and transmission of pathogens to patients is unknown.

Bottom line: Contamination of HCP attire with healthcare-associated pathogens occurs, but no clinical data currently exist on transmission of these pathogens to patients or its impact on the healthcare system. Patient satisfaction and confidence are not diminished by less formal attire when patients are informed of the potential infection risks.

Citation: Bearman G, Bryant K, Leekha S, et al. Healthcare personnel attire in non-operating-room settings. Infect Control Hosp Epidemiol. 2014;35(2):107-121.

Prediction Tool for Readmissions Due to End-of-Life Care

Clinical question: What are the risk factors associated with potentially avoidable readmissions (PARs) for end-of-life care issues?

Background: The 6% of Medicare beneficiaries who die each year account for 30% of yearly Medicare expenditures on medical treatments, with repeated hospitalizations a frequent occurrence at the end of life. There are many opportunities to improve the care of patients at the end of life.

Study design: Nested case-control.

Setting: Academic, tertiary-care medical center.

Synopsis: There were 10,275 eligible admissions to Brigham and Women’s Hospital in Boston from July 1, 2009 to June 30, 2010, with a length of stay less than one day. There were 2,301 readmissions within 30 days of the index hospitalization, of which 826 were considered potentially avoidable. From a random sample of 594 of these patients, 80 patients had PAR related to end-of-life care issues. There were 7,974 patients who were not admitted within 30 days of index admission (controls). The primary study outcome was any 30-day PAR due to end-of-life care issues. A readmission was considered a PAR if it related to previously known conditions from the index hospitalization or was due to a complication of treatment.

The four factors significantly associated with 30-day PAR for end-of-life care issues were neoplasm (OR, 5.6; 95% CI, 2.85-11.0), opiate medication at discharge (OR, 2.29; 95% CI, 1.29-4.07), Elixhauser comorbidity index per five-unit increase (OR, 1.16; 95% CI, 1.10-1.22), and number of admissions in the previous 12 months (OR, 1.10; 95% CI, 1.02-1.20). The model including all four variables had excellent discrimination, with a C-statistic of 0.85.
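
To make the modeling step concrete, the sketch below fits a logistic regression with four analogous predictors on simulated admissions and reports its C-statistic (area under the ROC curve); the variable definitions, coefficients, and simulated data are assumptions for illustration and do not reproduce the published model or its odds ratios.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000

# Simulated predictors mirroring the article's four variables (values invented).
neoplasm = rng.binomial(1, 0.2, n)
opiate_at_discharge = rng.binomial(1, 0.3, n)
elixhauser = rng.poisson(8, n)            # comorbidity index
prior_admissions = rng.poisson(1.5, n)    # admissions in previous 12 months

# Assumed data-generating model for a 30-day PAR related to end-of-life care.
logit = (-4.0 + 1.7 * neoplasm + 0.8 * opiate_at_discharge
         + 0.03 * elixhauser + 0.10 * prior_admissions)
par = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([neoplasm, opiate_at_discharge, elixhauser, prior_admissions])
model = LogisticRegression(max_iter=1000).fit(X, par)

# Discrimination of the fitted model on the simulated data.
c_stat = roc_auc_score(par, model.predict_proba(X)[:, 1])
print(f"C-statistic on simulated data: {c_stat:.2f}")
```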

Bottom line: The factors from this prediction model can be used, formally or informally, to identify those patients at higher risk for readmission for end-of-life care issues and prioritize resources to help minimize this risk.

Citation: Donzé J, Lipsitz S, Schnipper JL. Risk factors for potentially avoidable readmissions due to end-of-life care issues. J Hosp Med. 2014;9(5):310-314.

Colonic Malignancy Risk Appears Low After Uncomplicated Diverticulitis

Article Type
Changed
Thu, 12/15/2022 - 16:17

Clinical question: What is the benefit of routine colonic evaluation after an episode of acute diverticulitis?

Background: Currently accepted guidelines recommend routine colonic evaluation (colonoscopy or computed tomography [CT] colonography) after an episode of acute diverticulitis to confirm the diagnosis and exclude malignancy. Increasing use of CT to confirm the diagnosis of acute diverticulitis and to exclude associated complications has called the recommendation for routine follow-up colonic evaluation into question.

Study design: Meta-analysis.

Setting: Search of online databases and the Cochrane Library.

Synopsis: Eleven studies from seven countries included 1,970 patients who underwent colonic evaluation after an episode of acute diverticulitis; the overall risk of finding a malignancy was 1.6%. Within this population, 1,497 patients had uncomplicated diverticulitis, and cancer was found in only five of them (proportional risk estimate 0.7%).
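As a rough check on these figures, the sketch below computes the crude malignancy proportion and a Wilson 95% confidence interval from the counts quoted above for uncomplicated diverticulitis (5 cancers among 1,497 patients). This is not the authors' method; the meta-analysis pooled study-level estimates, which likely explains why the published proportional risk estimate (0.7%) sits above the crude proportion of roughly 0.3% computed here.

```python
# Minimal sketch, not the meta-analytic pooling used in the paper: a crude
# proportion with a Wilson score 95% confidence interval from raw counts.
from math import sqrt

def wilson_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Return (proportion, lower bound, upper bound) of the Wilson score interval."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half_width, centre + half_width

# Uncomplicated diverticulitis: 5 cancers among 1,497 patients evaluated.
p, lo, hi = wilson_ci(5, 1497)
print(f"crude proportion {p:.2%} (95% CI {lo:.2%}-{hi:.2%})")  # ~0.33% (0.14%-0.78%)
```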

For the 79 patients identified as having complicated diverticulitis, the risk of finding a malignancy on subsequent screening was 10.8%.

Every systematic review is limited by the quality of the available studies and by differences in their design and methodology. In this meta-analysis, the risk of finding cancer after an episode of uncomplicated diverticulitis appears to be low. Given the limited resources of the healthcare system and the small but real risk of morbidity and mortality associated with invasive colonic procedures, the routine recommendation for colon cancer screening after an episode of acute uncomplicated diverticulitis should be further evaluated.

Bottom line: The risk of malignancy after a radiologically proven episode of acute uncomplicated diverticulitis is low. In the absence of other indications, additional routine colonic evaluation may not be necessary.

Citation: Sharma PV, Eglinton T, Hider P, Frizelle F. Systematic review and meta-analysis of the role of routine colonic evaluation after radiologically confirmed acute diverticulitis. Ann Surg. 2014;259(2):263-272.

Physician Burnout Reduced with Intervention Groups

Article Type
Changed
Thu, 12/15/2022 - 16:17

Clinical question: Does an intervention involving a facilitated physician small group result in improvement in well-being and reduction in burnout?

Background: Burnout affects nearly half of medical students, residents, and practicing physicians in the U.S.; however, very few interventions have been tested to address this problem.

Study design: Randomized controlled trial (RCT).

Setting: Department of Medicine at the Mayo Clinic, Rochester, Minn.

Synopsis: Practicing physicians were randomly assigned to a facilitated, small-group intervention curriculum for one hour every two weeks (N=37) or to a control arm with unstructured, protected time for one hour every two weeks (N=37). A non-trial cohort of 350 practicing physicians was surveyed annually. The intervention group showed a significant increase in empowerment and engagement at three months that was sustained at 12 months, as well as a significant decrease in high depersonalization scores at both three and 12 months. There were no significant differences in stress, depression, quality of life, or job satisfaction.

Compared to the non-trial cohort, depersonalization, emotional exhaustion, and overall burnout decreased substantially in the intervention arm and slightly in the control arm.

The sample size was small, and the results may not be generalizable. The curriculum covered reflection, self-awareness, and mindfulness, combining community building with skill acquisition to promote connectedness and meaning in work. It is not clear which elements of the curriculum were most effective.

Bottom line: A facilitated, small-group intervention with institution-provided protected time can improve physician empowerment and engagement and reduce depersonalization, an important component of burnout.

Citation: West CP, Dyrbye LN, Rabatin JT, et al. Intervention to promote physician well-being, job satisfaction, and professionalism: a randomized clinical trial. JAMA Intern Med. 2014;174(4):527-533.
