Daniel Brotman, MD, Division of Hospital Medicine, Northwestern University School of Medicine, Chicago, Illinois

New Frontiers in High-Value Care Education and Innovation: When Less is Not More

In this issue of the Journal of Hospital Medicine®, Drs. Arora and Moriates highlight an important deficiency in quality improvement efforts designed to reduce overuse of tests and treatments: the potential for trainees—and by extension, more seasoned clinicians—to rationalize minimizing under the guise of high-value care.1 This insightful perspective from the Co-Directors of Costs of Care should serve as a catalyst for further robust and effective care redesign efforts to optimize the use of all medical resources, including tests, treatments, procedures, consultations, emergency department (ED) visits, and hospital admissions. The formula to root out minimizers is not straightforward and requires an evaluation of wasteful practices in a nuanced and holistic manner that considers not only how often the overused test (or treatment) is ordered but also the collateral impact of not ordering it. This principle has implications for measuring, paying for, and studying high-value care.

Overuse of tests and treatments increases costs and carries a risk of harm, from unnecessary use of creatine kinase–myocardial band (CK-MB) testing in suspected acute coronary syndrome2 to unwarranted administration of antibiotics for asymptomatic bacteriuria3 to over-administration of blood transfusions.4 However, decreasing the use of a commonly ordered test is not always clinically appropriate. To illustrate this point, consider the evidence-based algorithm for the work-up of pulmonary embolism (PE) by Raja et al., which integrates pretest probability, the pulmonary embolism rule-out criteria (PERC), and appropriate use of D-dimer testing and pulmonary CT angiography (CTA).5 Avoiding D-dimer testing is appropriate in patients with very low pretest probability who pass a PERC clinical assessment, and also in patients whose clinical probability of PE is high enough to justify CTA regardless of the D-dimer result. On the other hand, avoiding D-dimer testing by attributing a patient's symptoms to anxiety (as a minimizer might do) would increase patient risk and could ultimately increase cost if that patient ends up in intensive care after a delayed diagnosis. Following diagnostic algorithms that integrate physician decision-making and evidence-based guidelines can prevent both overuse and underuse, thereby maximizing efficiency and effectiveness. Engaging trainees in the development of such algorithms and decision support tools will help ingrain these principles into their practice.
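To make the branching concrete, the following is a minimal sketch of this kind of work-up logic, not a restatement of the published algorithm: the function name, the three-level pretest-probability categories, and the 500 ng/mL D-dimer threshold are illustrative assumptions.

```python
from typing import Optional

def pe_workup_step(pretest: str, perc_negative: bool,
                   d_dimer_ng_ml: Optional[float] = None) -> str:
    """Suggest the next step for suspected PE.

    Illustrative sketch only; the categories and the 500 ng/mL cutoff
    are assumptions for this example, not guideline text.
    """
    if pretest == "high":
        # High clinical probability justifies CTA directly;
        # a D-dimer result would not change management.
        return "pulmonary CT angiography"
    if pretest == "low" and perc_negative:
        # Very low probability plus a negative PERC assessment:
        # appropriately skip the D-dimer and stop the work-up.
        return "no further testing"
    if d_dimer_ng_ml is None:
        # Otherwise the D-dimer gates the decision to image.
        return "order D-dimer"
    return "pulmonary CT angiography" if d_dimer_ng_ml >= 500 else "PE excluded"
```

Note that the D-dimer is appropriately skipped in two different branches for two different reasons, whereas a minimizer who attributes the symptoms to anxiety never enters the algorithm at all, which is the failure mode described above.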

Arora and Moriates highlight the importance of caring for a patient along a continuum rather than simply optimizing practice with respect to a single management decision or an isolated care episode. This approach is fundamental to the quality of care we provide, the public trust our profession still commands, and the total cost of care (TCOC). The two largest contributors to debilitating patient healthcare debt are not overuse of tests and treatments, but ED visits and hospitalizations.6 Thus, high-value quality improvement needs to anticipate future healthcare needs, including those that may result from delayed or missed diagnoses. Furthermore, excessive focus on the minutiae of high-value care (eg, ordering fewer daily basic metabolic panels) can lead to change fatigue and divert attention from higher-impact utilization targets. We endorse a holistic approach in which the lens is shifted from the test—and even from the encounter or episode of care—to the entire continuum of care so that we can safeguard against inappropriate minimization. This approach has started to gain traction with policymakers. For example, the state of Maryland has implemented a TCOC hospital payment model predicted to save $1 billion by 2023.7 The TCOC model includes a Care Redesign Program whereby hospitals and nonhospital healthcare providers collaborate to improve the quality of care while reducing spending; cost savings can be used for incentive payments to the nonhospital providers (gainsharing), with quality measures monitored simultaneously to guard against rationing.7 In keeping with the authors' call to prioritize overall health, this new reimbursement model and others like it aim to incentivize the delivery of high-value care across a continuum.

Research is needed to guide best practice from this global perspective; as such, value improvement projects aimed at optimizing the use of tests and treatments should include rigorous methodology, measures of downstream outcomes and costs, and balancing safety measures.8 For example, the ROMICAT II randomized trial evaluated two diagnostic approaches in ED patients with suspected acute coronary syndrome (ACS): early coronary computed tomography angiography (CCTA) and standard ED evaluation.9 In addition to outcomes related to the ED visit itself, downstream testing and outcomes for 28 days after the episode were studied. In the acute setting, CCTA decreased time to diagnosis, reduced mean hospital length of stay by 7.6 hours, and resulted in 47% of patients being discharged within 8.6 hours, as opposed to only 12% of the standard evaluation cohort. No cases of ACS were missed, and the CCTA cohort had slightly fewer cardiovascular adverse events (P = .18). However, the CCTA patients received significantly more diagnostic and functional testing and higher radiation exposure than the standard evaluation cohort, and underwent modestly higher rates of coronary angiography and percutaneous coronary intervention. The TCOC over the 28-day period was similar: $4,289 for CCTA versus $4,060 for standard care (P = .65).9

Reducing the TCOC is imperative to protect patients from the burden of healthcare debt, but concerns have been raised about the ethics of high-value care if decision-making is driven by cost considerations.10 A recent viewpoint proposed a framework in which high-value care recommendations are categorized as obligatory (protecting patients from harm), permissible (calling for shared decision-making), or suspect (entirely cost-driven).10 By reframing care redesign as thoughtful, responsible care delivery, we can better incentivize physicians to exercise professionalism and maintain medical practice as a public trust.

High-value champions have a great deal of work ahead to redesign care to improve health, reduce TCOC, and investigate outcomes of care redesign. We applaud Drs. Arora and Moriates for once again leading the charge in preparing medical students and residents to deliver higher-value healthcare by emphasizing that effective patient care is not measured by a single episode or clinical decision, but is defined through a lifelong partnership between the patient and the healthcare system. As the country moves toward improved holistic models of care and financing, physician leadership in care redesign is essential to ensure that quality, safety, and patient well-being are not sacrificed at the altar of cost savings.

 

 

Disclosures

Dr. Johnson is a consultant and advisory board member at Oliver Wyman, receives salary support from an AHRQ grant, and has pending potential royalties from licensure of evidence-based appropriate use guidelines/criteria to AgileMD, a clinical decision support company. The other authors have no relevant disclosures. Dr. Johnson and Dr. Pahwa are Co-Directors of the High Value Practice Academic Alliance (www.hvpaa.org).

 

References

1. Arora V, Moriates C. Tackling the minimizers behind high value care. J Hosp Med. 2019;14(5):318-319. doi: 10.12788/jhm.3104.
2. Alvin MD, Jaffe AS, Ziegelstein RC, Trost JC. Eliminating creatine kinase-myocardial band testing in suspected acute coronary syndrome: a value-based quality improvement. JAMA Intern Med. 2017;177(10):1508-1512. doi: 10.1001/jamainternmed.2017.3597.
3. Daniel M, Keller S, Mozafarihashjin M, Pahwa A, Soong C. An implementation guide to reducing overtreatment of asymptomatic bacteriuria. JAMA Intern Med. 2018;178(2):271-276. doi: 10.1001/jamainternmed.2017.7290.
4. Sadana D, Pratzer A, Scher LJ, et al. Promoting high-value practice by reducing unnecessary transfusions with a patient blood management program. JAMA Intern Med. 2018;178(1):116-122. doi: 10.1001/jamainternmed.2017.6369.
5. Raja AS, Greenberg JO, Qaseem A, et al. Evaluation of patients with suspected acute pulmonary embolism: best practice advice from the Clinical Guidelines Committee of the American College of Physicians. Ann Intern Med. 2015;163(9):701-711. doi: 10.7326/M14-1772.
6. The Burden of Medical Debt: Results from the Kaiser Family Foundation/New York Times Medical Bills Survey. https://www.kff.org/health-costs/report/the-burden-of-medical-debt-results-from-the-kaiser-family-foundationnew-york-times-medical-bills-survey/. Accessed December 2, 2018.
7. Maryland Total Cost of Care Model. https://innovation.cms.gov/initiatives/md-tccm/. Accessed December 2, 2018.
8. Grady D, Redberg RF, O'Malley PG. Quality improvement for quality improvement studies. JAMA Intern Med. 2018;178(2):187. doi: 10.1001/jamainternmed.2017.6875.
9. Hoffmann U, Truong QA, Schoenfeld DA, et al. Coronary CT angiography versus standard evaluation in acute chest pain. N Engl J Med. 2012;367:299-308. doi: 10.1056/NEJMoa1201161.
10. DeCamp M, Tilburt JC. Ethics and high-value care. J Med Ethics. 2017;43(5):307-309. doi: 10.1136/medethics-2016-103880.

Journal of Hospital Medicine. 2019;14(5):323-324. Published online first April 8, 2019.

BOOST: Evidence Needing a Lift

In this issue of the Journal of Hospital Medicine, Hansen and colleagues provide a first, early look at the effectiveness of the BOOST intervention to reduce 30‐day readmissions among hospitalized patients.[1] BOOST[2] is 1 of a number of care transition improvement methodologies that have been applied to the problem of readmissions, each of which has evidence to support its effectiveness in its initial settings[3, 4] but has proven to be difficult to translate to other sites.[5, 6, 7]

BOOST stands in contrast with other, largely research protocol-derived, programs in that it allows sites to tailor adoption of recommendations to local contexts and is therefore potentially more feasible to implement. Feasibility and practicality have led BOOST to be adopted in large national settings, even though it has had little evidence to support its effectiveness to date.

Given the nonstandardized and ad hoc nature of most multicenter collaboratives generally, and the flexibility of the BOOST model specifically, the BOOST authors are to be commended for undertaking any evaluation at all. Perhaps not surprisingly, they encountered many of the problems associated with a multicenter study: dropout of sites, problematic data, and limited evidence for adoption of the intervention at participating hospitals. Although these represent real-world experiences of a quality-improvement program, as a group they pose a number of problems that limit the study's robustness and generate important caveats that readers should use to temper their interpretation of the authors' findings.

The first caveat relates to the substantial number of sites that either dropped out of BOOST or failed to submit data after enlisting in the collaborative. Although this may be common in quality improvement collaboratives, similar problems would not be permissible in a trial of a new drug or device. Dropout and selective data submission suggest that the ability to fully adopt BOOST may not be universal, and they raise the possibility of bias, because the least successful sites may have had less interest in remaining engaged and submitting data.

The second caveat relates to how readmission rates were assessed. Because sites provided rates of readmissions at the unit level rather than the actual counts of admissions or readmissions, the authors were unable to conduct the statistical analyses typically performed for these interventions, such as time series or difference-in-differences analyses. More importantly, one cannot discern whether the results are driven by a small absolute but large relative change in the number of readmissions at small sites. That is, large percentage changes of low statistical significance could have misleadingly affected the overall results. Conversely, we cannot identify large sites where a similar relative reduction would be statistically significant and could more broadly be interpreted as representing the real effectiveness of BOOST efforts.
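A toy example, with invented numbers, illustrates the concern: when only per-site rates are available, each site weighs equally, so a large relative change at a tiny site can dominate the apparent improvement, whereas a count-weighted pooled rate, which requires the underlying counts the sites did not supply, barely moves.

```python
# Invented numbers for illustration only; not data from the BOOST evaluation.
# Each tuple: (admissions, readmissions before, readmissions after).
sites = [
    (2000, 300, 295),  # large site: 15.0% -> 14.8%, essentially unchanged
    (50, 10, 5),       # small site: 20.0% -> 10.0%, halved, but only 5 patients
]

def naive_mean_rate(col: int) -> float:
    """Unweighted mean of per-site rates: all that rate-only data permits."""
    return sum(s[col] / s[0] for s in sites) / len(sites)

def pooled_rate(col: int) -> float:
    """Count-weighted pooled rate: requires the underlying counts."""
    return sum(s[col] for s in sites) / sum(s[0] for s in sites)

print(f"naive mean of rates: {naive_mean_rate(1):.1%} -> {naive_mean_rate(2):.1%}")
# naive mean of rates: 17.5% -> 12.4%
print(f"pooled rate:         {pooled_rate(1):.1%} -> {pooled_rate(2):.1%}")
# pooled rate:         15.1% -> 14.6%
```

Here the unweighted mean of rates falls by about 5 percentage points while the pooled rate falls by only half a point; without the counts, a reader cannot tell which picture the reported results reflect.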

The third caveat concerns the data describing the sites' performance. The effectiveness of BOOST in this analysis varied greatly among sites, with only 1 site showing a strong reduction in readmission rate and nearly all others showing no statistically significant improvement. In fact, it appears that the overall results were almost entirely driven by the improvements at that 1 site.

Variable effectiveness of an intervention can be related to variable adoption or contextual factors (such as availability of personnel to implement the program). Although these authors have data on BOOST programmatic adoption, they do not have qualitative data on local barriers and facilitators to BOOST implementation, which at this stage of evaluation would be particularly valuable in understanding the results. Analyzing site‐level effectiveness is of growing relevance to multicenter quality improvement collaboratives,[8, 9] but this evaluation provides little insight into reasons for variable success across institutions.

Finally, the study design does not allow us to answer a number of key questions. How many patients were involved in the intervention? How many patients received all BOOST-recommended interventions? Which of these interventions seemed most effective in which patients? To what degree did patient severity of illness, cognitive status, social supports, or access to primary care influence readmission risk? Such information would help frame cost-effective deployment of BOOST or related tools.

In the end, it seems unlikely that this iteration of the BOOST program produced broad reductions in readmission rates. Having said this, the authors provide the necessary start down the road toward a fuller understanding of real-world efforts to reduce readmissions. Stated another way, the nuances and flaws of this study provide ample fodder for others working in the field. BOOST is in good company with other care transition models that have not translated well from their initial research environment to real-world practices. The question now is: Do any of these interventions actually work in clinical practice settings, and will we ever know? Even more fundamentally, how important and meaningful are these hospital-based care transition interventions? Where is the engagement with primary care? Where are the primary care outcomes? Does BOOST truly impact outcomes other than readmission?[10]

Doing high‐quality research in the context of a rapidly evolving quality improvement program is hard. Doing it at more than 1 site is harder. BOOST's flexibility is both a great source of strength and a clear challenge to rigorous evaluation. However, when the costs of care transition programs are so high, and the potential consequences of high readmission rates are so great for patients and for hospitals, the need to address these issues with real data and better evidence is paramount. We look forward to the next phase of BOOST and to the growth and refinement of the evidence base for how to improve care coordination and transitions effectively.

References
  1. Hansen L, Greenwald JL, Budnitz T, et al. Project BOOST: effectiveness of a multihospital effort to reduce rehospitalization. J Hosp Med. 2013;8(8):421-427.
  2. Williams MV, Coleman E. BOOSTing the hospital discharge. J Hosp Med. 2009;4:209-210.
  3. Jack BW, Chetty VK, Anthony D, et al. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med. 2009;150:178-187.
  4. Naylor MD, Brooten D, Campbell R, et al. Comprehensive discharge planning and home follow-up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281:613-620.
  5. Stauffer BD, Fullerton C, Fleming N, et al. Effectiveness and cost of a transitional care program for heart failure: a prospective study with concurrent controls. Arch Intern Med. 2011;171:1238-1243.
  6. Abelson R. Hospitals question Medicare rules on readmissions. New York Times. March 29, 2013. Available at: http://www.nytimes.com/2013/03/30/business/hospitals-question-fairness-of-new-medicare-rules.html?pagewanted=all.
Journal of Hospital Medicine. 2013;8(8):468-469. Copyright © 2013 Society of Hospital Medicine.
