Planned, Related or Preventable: Defining Readmissions to Capture Quality of Care
In this issue of the Journal of Hospital Medicine, Ellimoottil and colleagues use Medicare claims data from 131 Michigan hospitals to examine the characteristics of readmissions identified as planned by the planned readmission algorithm developed for the Centers for Medicare & Medicaid Services (CMS).1 They find that a substantial portion of readmissions currently classified as planned by the algorithm appear to be nonelective, as indicated by the presence of a charge by an emergency medicine physician or an admission type of emergent or urgent, making those hospitalizations unlikely to have been planned. They suggest that the algorithm could be modified to exclude such cases from the planned designation.
To determine whether modifying the algorithm as recommended is a good idea, it is helpful to examine the origins of the existing planned readmission algorithm. The algorithm originated as a consequence of hospital accountability measures for readmissions and was developed by this author in collaboration with colleagues at Yale University and elsewhere.2 Readmission measures have been controversial in part because some (undetermined) fraction of readmissions is clearly unavoidable. Many commentators have asked that readmission measures therefore capture only avoidable or related readmissions. Avoidable readmissions are those that could have been prevented by members of the healthcare system through actions taken during or after hospitalization, such as patient counseling, communication among team members, and guideline-concordant medical care. Related readmissions are those directly stemming from the index admission. However, reliably and accurately defining such events has proven elusive. One study, for instance, found that the rate of physician-assessed preventability in published studies ranged from 9% to 48%.3 The challenge is even greater in trying to determine preventability using claims data alone, without physician review of charts. Imagine, for instance, a patient with heart failure who is readmitted with a heart failure exacerbation. The readmission preceded by a large fast-food meal is likely preventable, although even in this case, some would argue the healthcare system should not be held accountable for the readmission if the patient had been properly counseled about avoiding salty food. The readmission preceded by progressively worsening systolic function in a patient who reliably takes medications, weighs herself daily, and watches her diet is likely not. But both appear identical in claims. “Related” is also a difficult concept to operationalize. A recently hospitalized patient readmitted with pneumonia might have acquired it in the hospital (related) or from her grandchild 2 weeks later (unrelated). Again, both appear identical in claims.
In an ideal world, clinicians would be held accountable only for preventable readmissions. In practice, that has not proven possible.
Instead, the CMS readmission measures omit readmissions that are thought to be planned in advance: necessary and intentional readmissions. Defining a planned readmission is conceptually easier than defining a preventable readmission, yet even this is not always straightforward. The clearest case might be a person with a longstanding plan to have an elective surgery (say, a hip replacement) who is briefly admitted with something minor enough not to delay a subsequent admission for the scheduled surgery. Other patients are admitted with acute problems that require follow-up hospitalization (for instance, an acute myocardial infarction that requires a coronary artery bypass graft 2 weeks later).4 More ambiguous are patients who are sent home on a course of treatment with a plan for rehospitalization if it fails; for instance, a patient with gangrene is sent home on intravenous antibiotics but fails to improve and is rehospitalized for an amputation. Is that readmission planned or unplanned? Reasonable people might disagree.
Nonetheless, assuming it is desirable to at least try to identify and remove planned readmissions from measures, there are a number of ways in which one might do so. Perhaps the simplest would be to classify each hospitalization as planned or not on the UB-04 claim form. Such a process would be very feasible but also subject to gaming or coding variability. Given that there is some ambiguity and no standard about what types of readmissions are planned, and that current policy provides incentives to reduce unplanned readmission rates, hospitals might vary in the cases to which they would apply such a code. This approach, therefore, has not been favored by payers to date. An alternative is to prospectively flag admissions that are expected to result in planned readmissions. In fiscal year 2014, the CMS implemented this option for newborns and patients with acute myocardial infarction by creating new discharge status codes of “discharged to [location] with a planned acute care hospital inpatient readmission.” Institutions can flag discharges that they know at the time of discharge will be followed by a readmission, such as a newborn who requires a repeat hospitalization for repair of a congenital anomaly.5 There is no time span within which the planned readmission must occur in order to qualify. However, the difficulty in broadening this option to all discharges lies in identification and matching, and the possibility of gaming remains. The code specifies neither when the readmission is expected nor for what diagnosis or procedure. How, then, do we know whether a subsequent readmission is the one anticipated? Unexpected readmissions may still occur in the interim. Conversely, what if the discharging clinicians do not know about an anticipated planned procedure? What would stop hospitals from labeling every discharge as expected to be followed by a planned readmission? These considerations have largely prevented the CMS from asking hospitals to apply the new code widely or from using the code to identify planned readmissions.
Instead, the existing algorithm attempts to identify procedures that might be done on an elective basis and assumes readmissions with these procedures are planned if they are paired with a nonurgent diagnosis. Ellimoottil and colleagues test whether this assumption holds using a creative approach: looking for emergency department (ED) physician charges and an admission type of emergent or urgent. They find that roughly half of the readmissions classified as planned are, in fact, likely unplanned. This figure agrees closely with the original chart review validation of the algorithm. In particular, they find that some procedures, such as percutaneous cardiac interventions, appear to be paired regularly with a nonurgent principal diagnosis, such as coronary artery disease, even when performed on an urgent basis.
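To make the classification logic concrete, the sketch below shows how a claims-based rule of this kind, extended with the ED-charge and admission-type check the authors propose, might be implemented. It is a minimal illustration only: the field names, procedure and diagnosis lists, and simplified structure are hypothetical assumptions, not the CMS specification or the authors’ code.

```python
# Minimal sketch of a claims-based "planned readmission" rule of the kind
# described above, extended with the ED-charge / admission-type check that
# Ellimoottil and colleagues propose. Field names, code lists, and the
# simplified logic are illustrative assumptions, not the CMS specification.

POTENTIALLY_PLANNED_PROCEDURES = {"hip_replacement", "chemotherapy", "ptca"}
ACUTE_DIAGNOSES = {"acute_mi", "sepsis", "heart_failure_exacerbation"}

def is_planned(readmission: dict, use_proposed_modification: bool = True) -> bool:
    """Classify a readmission claim as planned (True) or unplanned (False)."""
    # Existing logic: a potentially planned procedure paired with a
    # nonacute (nonurgent) principal diagnosis is assumed to be planned.
    planned = (
        readmission.get("procedure") in POTENTIALLY_PLANNED_PROCEDURES
        and readmission.get("principal_diagnosis") not in ACUTE_DIAGNOSES
    )
    if planned and use_proposed_modification:
        # Proposed refinement: claims with an emergency physician charge or an
        # emergent/urgent admission type are unlikely to have been planned.
        if readmission.get("ed_physician_charge") or readmission.get(
            "admission_type"
        ) in {"emergent", "urgent"}:
            planned = False
    return planned

# Example: a PTCA coded with a nonurgent diagnosis but billed with an ED
# charge flips from "planned" to "unplanned" under the proposed modification.
claim = {
    "procedure": "ptca",
    "principal_diagnosis": "coronary_artery_disease",
    "ed_physician_charge": True,
    "admission_type": "emergent",
}
print(is_planned(claim, use_proposed_modification=False))  # True
print(is_planned(claim))                                   # False
```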
This validation was performed prior to the availability of version 4.0 of the planned readmission algorithm, which removes from the potentially planned procedure list several high-frequency procedures (including cardiac devices and diagnostic cardiac catheterizations) that were very frequently mischaracterized as planned in the original chart validation.6 According to the table, at least 8 such cases were also identified in this validation. Therefore, the misclassification rate of the current version of the algorithm is probably lower than that reported in this article. Nonetheless, percutaneous transluminal coronary angioplasty remains on the planned procedure list in version 4.0 and appears to account for a substantial share of the errors, and it is likely that the authors’ approach would improve the accuracy even of the newer version of the algorithm.
The advantages of the suggested modifications are that they do not require chart review and could be readily adopted by the CMS. Although seeking ED charges for Medicare patients is somewhat cumbersome in that they are recorded in a different data set from the inpatient hospitalizations, there is no absolute barrier to adding this step to the algorithm, and doing so has substantial face validity. That said, identifying ED visits is not straightforward, both because nonemergency services can be provided in the ED (eg, critical care or observation care) and because facilities and providers have different billing requirements, producing different estimates depending on the data set used.7 Including admission type would be easier, but it would be less conservative and likely less accurate, as this field has not been validated and is not typically audited. Nonetheless, adding the presence of ED charges seems likely to improve the accuracy of the algorithm. As the CMS continues to refine the planned readmission algorithm, these proposed changes would be very reasonable to study with chart validation and, if valid, to consider adopting.
Disclosure
Dr. Horwitz reports grants from the Centers for Medicare & Medicaid Services and from the Agency for Healthcare Research and Quality during the conduct of the study.
1. Ellimoottil C, Khouri R, Dhir A, Hou H, Miller D, Dupree J. An opportunity to improve Medicare’s planned readmissions measure. J Hosp Med. 2017;12(10):840-842.
2. Horwitz LI, Grady JN, Cohen DB, et al. Development and validation of an algorithm to identify planned readmissions from claims data. J Hosp Med. 2015;10(10):670-677.
3. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
4. Assmann A, Boeken U, Akhyari P, Lichtenberg A. Appropriate timing of coronary artery bypass grafting after acute myocardial infarction. Thorac Cardiovasc Surg. 2012;60(7):446-451.
5. Inpatient Prospective Payment System/Long-Term Care Hospital (IPPS/LTCH) Final Rule, 78 Fed. Reg. 27520 (Aug 19, 2013) (to be codified at 42 C.F.R. Parts 424, 414, 419, 424, 482, 485, and 489). http://www.gpo.gov/fdsys/pkg/FR-2013-08-19/pdf/2013-18956.pdf. Accessed May 4, 2017.
6. Yale New Haven Health Services Corporation Center for Outcomes Research and Evaluation. 2016 Condition-Specific Measures Updates and Specifications Report: Hospital-Level 30-Day Risk-Standardized Readmission Measures. March 2016.
7. Venkatesh AK, Mei H, Kocher KE, et al. Identification of emergency department visits in Medicare administrative claims: approaches and implications. Acad Emerg Med. 2017;24(4):422-431.
Noise and Light Pollution in the Hospital: A Call for Action
“Unnecessary noise is the most cruel abuse of care which can be inflicted on either the sick or the well.”
–Florence Nightingale1
Motivated by the “unsustainable” rise in noise pollution and its “direct, as well as cumulative, adverse health effects,” an expert World Health Organization (WHO) task force composed the Guidelines for Community Noise, outlining specific noise recommendations for public settings, including hospitals.2 For ward settings, these guidelines recommend that background noise levels (noise being defined as unwanted sound) average <35 decibels (dB; about the level of a typical library) during the day, average <30 dB at night, and peak no higher than 40 dB (about the level of a normal conversation), a level sufficient to awaken someone from sleep.
Since the publication of these guidelines in 1999, substantial new research has added to our understanding of hospital noise levels. Recent studies have demonstrated that few, if any, hospitals comply with the WHO noise recommendations.3 Moreover, since 1960, hospital sound levels have risen by roughly 4 dB per decade; given the logarithmic decibel scale, if this trend continues, it translates to a 528% increase in loudness by 2020.3
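A rough back-of-the-envelope reading of that figure, assuming (as a common rule of thumb, not stated in the cited study) that perceived loudness roughly doubles with every 10 dB increase in sound level:

```python
# Back-of-the-envelope reading of the "528%" figure, assuming perceived
# loudness doubles for every 10 dB increase (a rule of thumb, not a claim
# taken from the cited study).
decades = (2020 - 1960) / 10           # 6 decades
db_increase = 4 * decades              # ~24 dB total rise at ~4 dB per decade
loudness_ratio = 2 ** (db_increase / 10)
print(round(loudness_ratio, 2))        # ~5.28: hospitals in 2020 would be
                                       # perceived as roughly 5 times as loud
                                       # as in 1960 (~528% of the 1960 level)
```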
The overwhelming majority of research on hospital noise has focused on the intensive care unit (ICU), where beeping machines and busy staff often push peak nighttime noise levels over 80 dB (about the level of a kitchen blender).4 During sleep, ICU noise causes frequent arousals and awakenings, and when it is combined with other factors, such as bright light and patient care interactions, poor sleep quality invariably results.4
While it has been known for years that critically ill patients experience markedly fragmented and nonrestorative sleep,5 poor sleep has recently gained attention because of its potential role as a modifiable risk factor for delirium and its associated consequences, including prolonged length of stay and long-lasting neuropsychological and physical impairments.6 In response, numerous interventions have been attempted,7 including multicomponent bundles to promote sleep,8 which have been shown to reduce delirium in the ICU.9-12 Therefore, efforts to promote sleep in the ICU, including interventions to minimize nighttime noise, are recommended in Society of Critical Care Medicine clinical practice guidelines13 and are listed among the top 5 research priorities by an expert panel of ICU delirium researchers.14
In contrast, little attention has been paid to noise in other patient care areas. Existing studies in non-ICU ward settings suggest that, as in the ICU, excessive noise is common3 and that patients experience poor sleep, with noise a significant disruptor.5,15,16 Such poor sleep is thought to contribute to uncontrolled pain, labile blood pressure, and dissatisfaction with care.16,17
In this issue of the Journal of Hospital Medicine, Jaiswal and colleagues18 report on an important study evaluating sound and light levels in both non-ICU and ICU settings within a busy tertiary-care hospital. In 8 general ward, 8 telemetry, and 8 ICU patient rooms, the investigators used meters to record sound and light levels for 24 to 72 hours. In these locations, they detected average hourly sound levels ranging from 45 to 54 dB, 47 to 55 dB, and 56 to 60 dB, respectively, with the ICU consistently registering the highest hourly sound levels. Notably, all locations exceeded WHO noise limits at all hours of the day. As a novel measure, the investigators evaluated sound level changes (SLCs), the difference between peak and background sound levels, based on research suggesting that dramatic SLCs (≥17.5 dB) are more disruptive than constant loud noise.19 The authors observed that SLCs ≥17.5 dB occurred predominantly during daytime hours and, interestingly, at similar rates in the wards and the ICU.
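For readers who want to apply the metric to their own recordings, the sketch below shows one way an hourly count of large SLCs could be computed. The background estimate (a low percentile of the hour’s readings) and the data layout are assumptions for illustration; they are not necessarily the definition used by Jaiswal and colleagues.

```python
# Minimal sketch of counting large sound level changes (SLCs) in an hour of
# sound meter readings. The background estimate (10th percentile of the
# hour's readings) and the data layout are illustrative assumptions; the
# study's exact definition may differ.

from statistics import quantiles

DISRUPTIVE_SLC_DB = 17.5  # threshold suggested by Stanchina et al.

def count_disruptive_slcs(hourly_readings_db: list[float]) -> int:
    """Count readings exceeding the hour's background level by >=17.5 dB."""
    if len(hourly_readings_db) < 2:
        return 0
    # Approximate the background level as the 10th percentile of the hour.
    background = quantiles(hourly_readings_db, n=10)[0]
    return sum(1 for level in hourly_readings_db
               if level - background >= DISRUPTIVE_SLC_DB)

# Example: a fairly quiet hour punctuated by two loud events (alarm, cart).
readings = [42, 43, 41, 44, 62, 43, 42, 61, 44, 43]
print(count_disruptive_slcs(readings))  # 2
```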
Importantly, the authors do not link their findings to patient sleep or other patient outcomes; instead, they focus on employing rigorous methods to gather continuous recordings. By measuring light levels, the authors bring attention to an issue often considered less disruptive to sleep than noise.6,10,20 Similar to prior research,21 Jaiswal and colleagues demonstrate low levels of light at night, with no substantial difference between non-ICU and ICU settings. As a key finding, the authors highlight the low levels of light during daytime hours, particularly in the morning, when levels ranged from 22 to 101 lux in the wards and 16 to 39 lux in the ICU. While the optimal timing and brightness of light exposure remain unknown, it is well established that ambient light is the most potent cue for circadian rhythms, with levels >100 lux necessary to suppress melatonin, the key hormone involved in circadian entrainment. Hence, the levels of morning light observed in this study were likely insufficient to maintain healthy circadian rhythms. When exposed to abnormal light levels and factors such as noise, stress, and medications, hospitalized patients are at risk for circadian rhythm misalignment, which can disrupt sleep and trigger a complex molecular cascade, leading to end-organ dysfunction, including depressed immunity, glucose dysregulation, arrhythmias, and delirium.22-24
What are the major takeaway messages from this study? First, it confirms that sound levels are high not only in the ICU but also in non-ICU wards. As hospital ratings and reimbursement now depend in part on favorable patient experience scores, future noise-reduction efforts will surely expand more vigorously across patient care areas.25 Second, SLCs and daytime recordings must be included in efforts to understand and improve sleep and circadian rhythms in hospitalized patients. Finally, this study provides a sobering reminder of the challenge of meeting WHO guidelines and fostering an optimal healing environment for patients. Sadly, hospital sound levels continue to rise, and quiet-time interventions consistently fail to lower noise to levels anywhere near WHO limits.26 Hence, to make any progress, hospitals of the future must entertain novel design modifications (eg, sound-absorbing walls and alternative room layouts), fix common sources of noise pollution (eg, ventilation systems and alarms), and critically evaluate and update interventions aimed at improving sleep and aligning circadian rhythms for hospitalized patients.27
Acknowledgments
B.B.K. is currently supported by a grant through the University of California, Los Angeles Clinical Translational Research Institute and the National Institutes of Health’s National Center for Advancing Translational Sciences (UL1TR000124).
Disclosure
The authors have nothing to disclose.
1. Nightingale F. Notes on Nursing: What It Is, and What It Is Not. Harrison; 1860.
2. Berglund B, Lindvall T, Schwela DH. Guidelines for Community Noise. Geneva, Switzerland: World Health Organization; 1999. http://www.who.int/docstore/peh/noise/guidelines2.html. Accessed June 23, 2017.
3. Busch-Vishniac IJ, West JE, Barnhill C, Hunter T, Orellana D, Chivukula R. Noise levels in Johns Hopkins Hospital. J Acoust Soc Am. 2005;118(6):3629-3645.
4. Kamdar BB, Needham DM, Collop NA. Sleep deprivation in critical illness: its role in physical and psychological recovery. J Intensive Care Med. 2012;27(2):97-111.
5. Knauert MP, Malik V, Kamdar BB. Sleep and sleep disordered breathing in hospitalized patients. Semin Respir Crit Care Med. 2014;35(5):582-592.
6. Kamdar BB, Knauert MP, Jones SF, et al. Perceptions and practices regarding sleep in the intensive care unit. A survey of 1,223 critical care providers. Ann Am Thorac Soc. 2016;13(8):1370-1377.
7. DuBose JR, Hadi K. Improving inpatient environments to support patient sleep. Int J Qual Health Care. 2016;28(5):540-553.
8. Kamdar BB, Needham DM. Bundling sleep promotion with delirium prevention: ready for prime time? Anaesthesia. 2014;69(6):527-531.
9. Patel J, Baldwin J, Bunting P, Laha S. The effect of a multicomponent multidisciplinary bundle of interventions on sleep and delirium in medical and surgical intensive care patients. Anaesthesia. 2014;69(6):540-549.
10. Kamdar BB, King LM, Collop NA, et al. The effect of a quality improvement intervention on perceived sleep quality and cognition in a medical ICU. Crit Care Med. 2013;41(3):800-809.
11. van de Pol I, van Iterson M, Maaskant J. Effect of nocturnal sound reduction on the incidence of delirium in intensive care unit patients: an interrupted time series analysis. Intensive Crit Care Nurs. 2017;41:18-25.
12. Flannery AH, Oyler DR, Weinhouse GL. The impact of interventions to improve sleep on delirium in the ICU: a systematic review and research framework. Crit Care Med. 2016;44(12):2231-2240.
13. Barr J, Fraser GL, Puntillo K, et al. Clinical practice guidelines for the management of pain, agitation, and delirium in adult patients in the intensive care unit. Crit Care Med. 2013;41(1):263-306.
14. Pandharipande PP, Ely EW, Arora RC, et al. The intensive care delirium research agenda: a multinational, interprofessional perspective [published online ahead of print June 13, 2017]. Intensive Care Med.
15. Topf M, Thompson S. Interactive relationships between hospital patients’ noise-induced stress and other stress with sleep. Heart Lung. 2001;30(4):237-243.
16. Tamrat R, Huynh-Le MP, Goyal M. Non-pharmacologic interventions to improve the sleep of hospitalized patients: a systematic review. J Gen Intern Med. 2014;29(5):788-795.
17. Fillary J, Chaplin H, Jones G, Thompson A, Holme A, Wilson P. Noise at night in hospital general wards: a mapping of the literature. Br J Nurs. 2015;24(10):536-540.
18. Jaiswal SJ, Garcia S, Owens RL. Sound and light levels are similarly disruptive in ICU and non-ICU wards. J Hosp Med. 2017;12(10):798-804. https://doi.org/10.12788/jhm.2826.
19. Stanchina ML, Abu-Hijleh M, Chaudhry BK, Carlisle CC, Millman RP. The influence of white noise on sleep in subjects exposed to ICU noise. Sleep Med. 2005;6(5):423-428.
20. Freedman NS, Kotzer N, Schwab RJ. Patient perception of sleep quality and etiology of sleep disruption in the intensive care unit. Am J Respir Crit Care Med. 1999;159(4 Pt 1):1155-1162.
21. Meyer TJ, Eveloff SE, Bauer MS, Schwartz WA, Hill NS, Millman RP. Adverse environmental conditions in the respiratory and medical ICU settings. Chest. 1994;105(4):1211-1216.
22. Castro R, Angus DC, Rosengart MR. The effect of light on critical illness. Crit Care. 2011;15(2):218.
23. Brainard J, Gobel M, Scott B, Koeppen M, Eckle T. Health implications of disrupted circadian rhythms and the potential for daylight as therapy. Anesthesiology. 2015;122(5):1170-1175.
24. Fitzgerald JM, Adamis D, Trzepacz PT, et al. Delirium: a disturbance of circadian integrity? Med Hypotheses. 2013;81(4):568-576.
25. Stafford A, Haverland A, Bridges E. Noise in the ICU. Am J Nurs. 2014;114(5):57-63.
26. Tainter CR, Levine AR, Quraishi SA, et al. Noise levels in surgical ICUs are consistently above recommended standards. Crit Care Med. 2016;44(1):147-152.
27. Ulrich RS, Zimring C, Zhu X, et al. A review of the research literature on evidence-based healthcare design. HERD. 2008;1(3):61-125.
“Unnecessary noise is the most cruel abuse of care which can be inflicted on either the sick or the well.”
–Florence Nightingale1
Motivated by the “unsustainable” rise in noise pollution and its “direct, as well as cumulative, adverse health effects,” an expert World Health Organization (WHO) task force composed the Guidelines for Community Noise, outlining specific noise recommendations for public settings, including hospitals.2 In ward settings, these guidelines mandate that background noise (which is defined as unwanted sound) levels average <35 decibels (dB; ie, a typical library) during the day, average <30 dB at night, and peak no higher than 40 dB (ie, a normal conversation), a level sufficient to awaken someone from sleep.
Since the publication of these guidelines in 1999, substantial new research has added to our understanding of hospital noise levels. Recent research has demonstrated that few, if any, hospitals comply with WHO noise recommendations.3 Moreover, since 1960, hospital sound levels have risen ~4 dB per decade; based on the logarithmic decibel scale, if this trend continues, this translates to a 528% increase in loudness by 2020.3
The overwhelming majority of research on hospital noise has focused on the intensive care unit (ICU), where beeping machines and busy staff often push peak nighttime noise levels over 80 dB (ie, a kitchen blender).4 When evaluated during sleep, noise in the ICU causes frequent arousals and awakenings. When noise is combined with other factors, such as bright light and patient care interactions, poor sleep quality invariably results.4
While it has been known for years that critically ill patients experience markedly fragmented and nonrestorative sleep,5 poor sleep has recently gained attention due to its potential role as a modifiable risk factor for delirium and its associated consequences, including prolonged length of stay and long-lasting neuropsychological and physical impairments.6 Due to this interest, numerous interventions have been attempted,7 including multicomponent bundles to promote sleep,8 which have been shown to reduce delirium in the ICU.9-12 Therefore, efforts to promote sleep in the ICU, including interventions to minimize nighttime noise, are recommended in Society of Critical Care Medicine clinical practice guidelines13 and are listed as a top 5 research priority by an expert panel of ICU delirium researchers.14
In contrast to the ICU, there has been little attention paid to noise in other patient care areas. Existing studies in non-ICU ward settings suggest that excessive noise is common,3 similar to the ICU, and that patients experience poor sleep, with noise being a significant disruptor of sleep.5,15,16 Such poor sleep is thought to contribute to uncontrolled pain, labile blood pressure, and dissatisfaction with care.16,17
In this issue of the Journal of Hospital Medicine, Jaiswal and colleagues18 report on an important study evaluating sound and light levels in both non-ICU and ICU settings within a busy tertiary-care hospital. In 8 general ward, 8 telemetry, and 8 ICU patient rooms, the investigators used meters to record sound and light levels for 24 to 72 hours. In each of these locations, they detected average hourly sound levels ranging from 45 to 54 dB, 47 to 55 dB, and 56 to 60 dB, respectively, with ICUs consistently registering the highest hourly sound levels. Notably, all locations exceeded WHO noise limits at all hours of the day. As a novel measure, the investigators evaluated sound level changes (SLCs), or the difference between peak and background sound levels, based on research suggesting that dramatic SLCs (≥17.5 dB) are more disruptive than constant loud noise.19 The authors observed that SLCs ≥17.5 dB occur predominantly during daytime hours and, interestingly, at a similar rate in the wards versus the ICU.
Importantly, the authors do not link their findings with patient sleep or other patient outcomes but instead focus on employing rigorous methods to gather continuous recordings. By measuring light levels, the authors bring attention to an issue often considered less disruptive to sleep than noise.6,10,20 Similar to prior research,21 Jaiswal and colleagues demonstrate low levels of light at night, with no substantial difference between non-ICU and ICU settings. As a key finding, the authors bring attention to low levels of light during daytime hours, particularly in the morning, when levels range from 22 to 101 lux in the wards and 16 to 39 lux in the ICU. While the optimal timing and brightness of light exposure remains unknown, it is well established that ambient light is the most potent cue for circadian rhythms, with levels >100 lux necessary to suppress melatonin, the key hormone involved in circadian entrainment. Hence, the levels of morning light observed in this study were likely insufficient to maintain healthy circadian rhythms. When exposed to abnormal light levels and factors such as noise, stress, and medications, hospitalized patients are at risk for circadian rhythm misalignment, which can disrupt sleep and trigger a complex molecular cascade, leading to end-organ dysfunction including depressed immunity, glucose dysregulation, arrhythmias, and delirium.22-24
What are the major takeaway messages from this study? First, it confirms that sound levels are not only high in the ICU but also in non-ICU wards. As hospital ratings and reimbursements now rely on favorable patient ratings, future noise-reduction efforts will surely expand more vigorously across patient care areas.25 Second, SLCs and daytime recordings must be included in efforts to understand and improve sleep and circadian rhythms in hospitalized patients. Finally, this study provides a sobering reminder of the challenge of meeting WHO guidelines and facilitating an optimal healing environment for patients. Sadly, hospital sound levels continue to rise, and quiet-time interventions consistently fail to lower noise to levels anywhere near WHO limits.26 Hence, to make any progress, hospitals of the future must entertain novel design modifications (eg, sound-absorbing walls and alternative room layouts), fix common sources of noise pollution (eg, ventilation systems and alarms), and critically evaluate and update interventions aimed at improving sleep and aligning circadian rhythms for hospitalized patients.27
Acknowledgments
B.B.K. is currently supported by a grant through the University of California, Los Angeles Clinical Translational Research Institute and the National Institutes of Health’s National Center for Advancing Translational Sciences (UL1TR000124).
Disclosure
The authors have nothing to disclose.
“Unnecessary noise is the most cruel abuse of care which can be inflicted on either the sick or the well.”
–Florence Nightingale1
Motivated by the “unsustainable” rise in noise pollution and its “direct, as well as cumulative, adverse health effects,” an expert World Health Organization (WHO) task force composed the Guidelines for Community Noise, outlining specific noise recommendations for public settings, including hospitals.2 In ward settings, these guidelines mandate that background noise (which is defined as unwanted sound) levels average <35 decibels (dB; ie, a typical library) during the day, average <30 dB at night, and peak no higher than 40 dB (ie, a normal conversation), a level sufficient to awaken someone from sleep.
Since the publication of these guidelines in 1999, substantial new research has added to our understanding of hospital noise levels. Recent research has demonstrated that few, if any, hospitals comply with WHO noise recommendations.3 Moreover, since 1960, hospital sound levels have risen ~4 dB per decade; based on the logarithmic decibel scale, if this trend continues, this translates to a 528% increase in loudness by 2020.3
The overwhelming majority of research on hospital noise has focused on the intensive care unit (ICU), where beeping machines and busy staff often push peak nighttime noise levels over 80 dB (ie, a kitchen blender).4 When evaluated during sleep, noise in the ICU causes frequent arousals and awakenings. When noise is combined with other factors, such as bright light and patient care interactions, poor sleep quality invariably results.4
While it has been known for years that critically ill patients experience markedly fragmented and nonrestorative sleep,5 poor sleep has recently gained attention due to its potential role as a modifiable risk factor for delirium and its associated consequences, including prolonged length of stay and long-lasting neuropsychological and physical impairments.6 Due to this interest, numerous interventions have been attempted,7 including multicomponent bundles to promote sleep,8 which have been shown to reduce delirium in the ICU.9-12 Therefore, efforts to promote sleep in the ICU, including interventions to minimize nighttime noise, are recommended in Society of Critical Care Medicine clinical practice guidelines13 and are listed as a top 5 research priority by an expert panel of ICU delirium researchers.14
In contrast to the ICU, there has been little attention paid to noise in other patient care areas. Existing studies in non-ICU ward settings suggest that excessive noise is common,3 similar to the ICU, and that patients experience poor sleep, with noise being a significant disruptor of sleep.5,15,16 Such poor sleep is thought to contribute to uncontrolled pain, labile blood pressure, and dissatisfaction with care.16,17
In this issue of the Journal of Hospital Medicine, Jaiswal and colleagues18 report on an important study evaluating sound and light levels in both non-ICU and ICU settings within a busy tertiary-care hospital. In 8 general ward, 8 telemetry, and 8 ICU patient rooms, the investigators used meters to record sound and light levels for 24 to 72 hours. In each of these locations, they detected average hourly sound levels ranging from 45 to 54 dB, 47 to 55 dB, and 56 to 60 dB, respectively, with ICUs consistently registering the highest hourly sound levels. Notably, all locations exceeded WHO noise limits at all hours of the day. As a novel measure, the investigators evaluated sound level changes (SLCs), or the difference between peak and background sound levels, based on research suggesting that dramatic SLCs (≥17.5 dB) are more disruptive than constant loud noise.19 The authors observed that SLCs ≥17.5 dB occur predominantly during daytime hours and, interestingly, at a similar rate in the wards versus the ICU.
Importantly, the authors do not link their findings with patient sleep or other patient outcomes but instead focus on employing rigorous methods to gather continuous recordings. By measuring light levels, the authors bring attention to an issue often considered less disruptive to sleep than noise.6,10,20 Similar to prior research,21 Jaiswal and colleagues demonstrate low levels of light at night, with no substantial difference between non-ICU and ICU settings. As a key finding, the authors bring attention to low levels of light during daytime hours, particularly in the morning, when levels range from 22 to 101 lux in the wards and 16 to 39 lux in the ICU. While the optimal timing and brightness of light exposure remains unknown, it is well established that ambient light is the most potent cue for circadian rhythms, with levels >100 lux necessary to suppress melatonin, the key hormone involved in circadian entrainment. Hence, the levels of morning light observed in this study were likely insufficient to maintain healthy circadian rhythms. When exposed to abnormal light levels and factors such as noise, stress, and medications, hospitalized patients are at risk for circadian rhythm misalignment, which can disrupt sleep and trigger a complex molecular cascade, leading to end-organ dysfunction including depressed immunity, glucose dysregulation, arrhythmias, and delirium.22-24
What are the major takeaway messages from this study? First, it confirms that sound levels are not only high in the ICU but also in non-ICU wards. As hospital ratings and reimbursements now rely on favorable patient ratings, future noise-reduction efforts will surely expand more vigorously across patient care areas.25 Second, SLCs and daytime recordings must be included in efforts to understand and improve sleep and circadian rhythms in hospitalized patients. Finally, this study provides a sobering reminder of the challenge of meeting WHO guidelines and facilitating an optimal healing environment for patients. Sadly, hospital sound levels continue to rise, and quiet-time interventions consistently fail to lower noise to levels anywhere near WHO limits.26 Hence, to make any progress, hospitals of the future must entertain novel design modifications (eg, sound-absorbing walls and alternative room layouts), fix common sources of noise pollution (eg, ventilation systems and alarms), and critically evaluate and update interventions aimed at improving sleep and aligning circadian rhythms for hospitalized patients.27
Acknowledgments
B.B.K. is currently supported by a grant through the University of California, Los Angeles Clinical Translational Research Institute and the National Institutes of Health’s National Center for Advancing Translational Sciences (UL1TR000124).
Disclosure
The authors have nothing to disclose.
1. Nightingale F. Notes on Nursing: What It Is, and What It Is Not. Harrison; 1860.
2. Berglund B, Lindvall T, Schwela DH. Guidelines for Community Noise. Geneva, Switzerland: World Health Organization; 1999. http://www.who.int/docstore/peh/noise/guidelines2.html. Accessed June 23, 2017.
3. Busch-Vishniac IJ, West JE, Barnhill C, Hunter T, Orellana D, Chivukula R. Noise levels in Johns Hopkins Hospital. J Acoust Soc Am. 2005;118(6):3629-3645.
4. Kamdar BB, Needham DM, Collop NA. Sleep deprivation in critical illness: its role in physical and psychological recovery. J Intensive Care Med. 2012;27(2):97-111.
5. Knauert MP, Malik V, Kamdar BB. Sleep and sleep disordered breathing in hospitalized patients. Semin Respir Crit Care Med. 2014;35(5):582-592.
6. Kamdar BB, Knauert MP, Jones SF, et al. Perceptions and practices regarding sleep in the intensive care unit. A survey of 1,223 critical care providers. Ann Am Thorac Soc. 2016;13(8):1370-1377.
7. DuBose JR, Hadi K. Improving inpatient environments to support patient sleep. Int J Qual Health Care. 2016;28(5):540-553.
8. Kamdar BB, Needham DM. Bundling sleep promotion with delirium prevention: ready for prime time? Anaesthesia. 2014;69(6):527-531.
9. Patel J, Baldwin J, Bunting P, Laha S. The effect of a multicomponent multidisciplinary bundle of interventions on sleep and delirium in medical and surgical intensive care patients. Anaesthesia. 2014;69(6):540-549.
10. Kamdar BB, King LM, Collop NA, et al. The effect of a quality improvement intervention on perceived sleep quality and cognition in a medical ICU. Crit Care Med. 2013;41(3):800-809.
11. van de Pol I, van Iterson M, Maaskant J. Effect of nocturnal sound reduction on the incidence of delirium in intensive care unit patients: an interrupted time series analysis. Intensive Crit Care Nurs. 2017;41:18-25.
12. Flannery AH, Oyler DR, Weinhouse GL. The impact of interventions to improve sleep on delirium in the ICU: a systematic review and research framework. Crit Care Med. 2016;44(12):2231-2240.
13. Barr J, Fraser GL, Puntillo K, et al. Clinical practice guidelines for the management of pain, agitation, and delirium in adult patients in the intensive care unit. Crit Care Med. 2013;41(1):263-306.
14. Pandharipande PP, Ely EW, Arora RC, et al. The intensive care delirium research agenda: a multinational, interprofessional perspective [published online ahead of print June 13, 2017]. Intensive Care Med.
15. Topf M, Thompson S. Interactive relationships between hospital patients’ noise-induced stress and other stress with sleep. Heart Lung. 2001;30(4):237-243.
16. Tamrat R, Huynh-Le MP, Goyal M. Non-pharmacologic interventions to improve the sleep of hospitalized patients: a systematic review. J Gen Intern Med. 2014;29(5):788-795.
17. Fillary J, Chaplin H, Jones G, Thompson A, Holme A, Wilson P. Noise at night in hospital general wards: a mapping of the literature. Br J Nurs. 2015;24(10):536-540.
18. Jaiswal SJ, Garcia S, Owens RL. Sound and light levels are similarly disruptive in ICU and non-ICU wards. J Hosp Med. 2017;12(10):798-804. https://doi.org/10.12788/jhm.2826.
19. Stanchina ML, Abu-Hijleh M, Chaudhry BK, Carlisle CC, Millman RP. The influence of white noise on sleep in subjects exposed to ICU noise. Sleep Med. 2005;6(5):423-428.
20. Freedman NS, Kotzer N, Schwab RJ. Patient perception of sleep quality and etiology of sleep disruption in the intensive care unit. Am J Respir Crit Care Med. 1999;159(4, Pt 1):1155-1162.
21. Meyer TJ, Eveloff SE, Bauer MS, Schwartz WA, Hill NS, Millman RP. Adverse environmental conditions in the respiratory and medical ICU settings. Chest. 1994;105(4):1211-1216.
22. Castro R, Angus DC, Rosengart MR. The effect of light on critical illness. Crit Care. 2011;15(2):218.
23. Brainard J, Gobel M, Scott B, Koeppen M, Eckle T. Health implications of disrupted circadian rhythms and the potential for daylight as therapy. Anesthesiology. 2015;122(5):1170-1175.
24. Fitzgerald JM, Adamis D, Trzepacz PT, et al. Delirium: a disturbance of circadian integrity? Med Hypotheses. 2013;81(4):568-576.
25. Stafford A, Haverland A, Bridges E. Noise in the ICU. Am J Nurs. 2014;114(5):57-63.
26. Tainter CR, Levine AR, Quraishi SA, et al. Noise levels in surgical ICUs are consistently above recommended standards. Crit Care Med. 2016;44(1):147-152.
27. Ulrich RS, Zimring C, Zhu X, et al. A review of the research literature on evidence-based healthcare design. HERD. 2008;1(3):61-125.
A Search for Tools to Support Decision-Making for PIVC Use
Peripheral intravenous catheters (PIVCs) are the most frequently used vascular access devices (VADs) in all patient populations and practice settings. Because the procedure is invasive and medications are administered through the catheter directly into the bloodstream, vascular access carries risk. There are multiple factors to consider when placing a PIVC, not the least of which is determining the most appropriate device for the patient based on the prescribed therapy.
VAD planning and assessment need to occur at the first patient encounter so that the most appropriate device is selected, aligns with the duration of treatment, minimizes the number of unnecessary VADs placed, and preserves veins for future needs. The level of the clinician’s expertise, coupled with challenging environments of care, adds to the complexity of what most perceive to be a “simple” procedure: placing a PIVC. For these reasons, it is imperative that clinicians be competent in the use and placement of VADs to ensure safe patient care.
Carr and colleagues1 performed a notable scoping review to determine the existence of tools, clinical prediction rules, and algorithms (TRAs) that would support decision-making for the use of PIVCs and promote first-time insertion success (FTIS). They refined their search strategy to studies that described the use or development of any TRA regarding PIVC insertion in hospitalized adult patients.
The team identified 36 references for screening and, based on their inclusion and exclusion criteria, retained 13 studies in the final review. Inclusion criteria comprised TRAs for PIVC insertion in hospitalized adult patients using a traditional insertion approach, defined as “an assessment and/or insertion with touch and feel, therefore, without vessel locating technology such as ultrasound and/or near infrared technology.”1 Of note, the exclusion criteria included pediatric studies, TRAs focused on postinsertion assessment, studies that examined VADs other than PIVCs, and studies in which vascular visualization techniques were used.
In general, the authors were unable to find reported evidence that the study recommendations were adopted in clinical practice, or evidence of what effect any TRA had on the success of PIVC insertion. As a result, they were unable to determine what, if any, clinical value the TRAs had.
The review of the studies, however, identified 3 categories of variables that had an impact on PIVC insertion success: patient, clinician, and product characteristics. Vein characteristics, such as the number, size, and location of veins, and patients’ clinical conditions, such as diabetes, sickle cell anemia, and intravenous drug abuse, were noted as predictors of PIVC insertion success. In 7 papers, the primary focus was on patients with a history of difficult intravenous access (DIVA). The definition of DIVA varied, ranging from the time to PIVC insertion to the number of failed attempts (from 1 to 3 or more).
Clinician variables, such as specialty nurse certification, years of experience, and self-reported skill level, were associated with successful insertions, and clinicians who predicted FTIS were likely to achieve it. Product variables, such as PIVC gauge size and the number of vein options, were examined for their relationship with successful first attempts.
Limitations noted by the researchers were the lack of sufficient published evidence for TRAs for PIVC insertion and the absence of standardized definitions of DIVA and of expert inserters. The number of variables and the dearth of standardized terms may also limit the ability to adopt any TRA.
While the purpose of the research was to identify TRAs that could guide clinical practice for the use of PIVCs and successful insertions, the authors make an important point that dwell time was not considered. While a TRA may lead to a successful insertion, its benefit may not extend across the intended life of the PIVC or the duration of the therapy. Therefore, TRAs should embed steps that ensure the appropriate device is selected at the start of the patient’s treatment.
The authors identified a need for further research in a critical area of patient care and safety. This article increases awareness of issues related to PIVCs and the impact they have on patient care. FTIS rates vary, and the implications of PIVC use are many. Patient satisfaction, avoidance of treatment delays, vein preservation, a decreased risk of complications, and the cost of labor and products are all factors to consider. Tools to improve patient outcomes related to device insertion, care, and management need to be developed and validated. The authors also note that future TRAs should integrate the use of ultrasound and vascular visualization technologies.
In a complex, challenging healthcare environment, tools and guidance that enhance practice not only help clinicians; they also have a positive impact on patient care. The need for research to bridge gaps in knowledge and science is clear. Those gaps must be identified, research conducted, and TRAs developed and adopted to enhance patient outcomes.
Disclosure
The author reports no conflicts of interest.
1. Carr PJ, Higgins NS, Rippey J, Cooke ML, Rickard CM. Tools, clinical prediction rules, and algorithms for the insertion of peripheral intravenous catheters in adult hospitalized patients: a systematic scoping review of literature. J Hosp Med. 2017;12(10):851-858.
Bedside manners: How to deal with delirium
During my training in Leiden, the Netherlands, I was infused with the lessons of Herman Boerhaave (1668–1738), the professor who is considered the pioneer of bedside teaching.1 This practice had begun in Padua and was then brought to Leiden, where Boerhaave transformed it into an art form. At the Caecilia hospital, the municipal clerics provided Boerhaave with 2 wards for teaching: 1 with 6 beds for men and the other with 6 beds for women. Medical historian Henry Sigerist2 has commented that half the physicians of Europe were trained on these 12 beds.
Boerhaave made daily rounds with his students, examining the patients, reviewing their histories, and inspecting their urine. He considered postmortem examinations essential and made his students attend the autopsies of patients who died: “In spite of the most detailed description of all disease phenomena one does not know anything of the cause until one has opened the body.”2
What was once the basis of clinical medicine is now fading, with both clinical rounds and autopsies being replaced by imaging techniques of body parts and automated analysis of Vacutainer samples. These novelties provide us with far more diagnostic accuracy than Boerhaave had, and randomized controlled trials provide us with an evidence base. But bedside observation and case reports are still relevant,3 and autopsies still reveal important, clinically missed diagnoses.4
In this issue of the Journal, Imm et al5 describe a case of presumed postoperative delirium in a 64-year-old hospitalized patient. They remind us that crucial signs and symptoms can guide how to use our modern diagnostic tools.
DELIRIUM IS OFTEN OVERLOOKED
Delirium is often overlooked by physicians. But why? The characteristic disturbances in attention and cognition are easy to recognize, and the various observation scales have high sensitivity and should signal the need for further evaluation. Perhaps the reason we often overlook the signs and symptoms is that we assume that delirium is just normal sickness behavior.
Another reason we may fail to recognize the syndrome is more fundamental and closely related to how we practice medicine. These days, we place such trust in high-tech diagnostics that we feel the modern procedures supersede the classic examination of patients. But mental disturbances can only be detected by history-taking and clinical observation.
Moreover, the actual mental state is less important than the subtle changes in it. A continuously disturbed mind is not the problem; rather, a casual remark by a family member or informal caregiver that “his mood has changed” should seize our attention.6
Here, the fragmented and disconnected practice of modern medicine takes its toll. Shorter working hours have helped to preserve our own mental performance, but at the cost of making us less able to follow the patient’s mental status over time and to recognize a change in behavior. Applying repeated, standardized assessments of mental status as a vital sign may help solve the problem, but repeated observations are easily neglected, as they are for body temperature, blood pressure, and other routine measurements.
DELIRIUM IS SERIOUS
Imm et al also remind us that delirium is serious. The case-fatality rate in delirium equals that in acute cardiovascular events or metastatic cancer, even though its impact is often not thought to be as severe. Far too often the mental symptoms are dismissed and judged to be something to be handled in the outpatient clinic after the acute problems are addressed.
In part, this may be because no professional society or advocacy group is promoting the recognition, diagnosis, and treatment of delirium or pushing for incentives to do so. We have cardiologists and oncologists but not deliriologists. But in a way, it may be a good thing that no specialist group “owns” delirium, as the syndrome is elicited by various underlying disease mechanisms, and every physician should be vigilant to recognize it.
DELIRIUM REQUIRES PROMPT MANAGEMENT
Because delirium is a life-threatening condition, it necessitates a prompt and coordinated series of diagnostic actions, judgments, and decisions.7 Although most delirious patients are not admitted to an intensive care unit, they should be considered critically ill and must be provided a corresponding level of care. Here, the old clinical aphorism holds: action should be taken before the sun sets or rises. Attention should focus on worsening of the underlying disease, unexpected comorbid conditions, and side effects of our interventions.
As the case reported by Imm et al shows, the causative factors may be recognized only after in-depth examination.4 The pathogenesis of delirium is poorly understood, and there is no specific therapy for it. There is not even conclusive evidence that the standard use of antipsychotics is beneficial, and their side effects should not be underestimated.7 Our interventions are aimed at eliminating the underlying pathologies that have triggered the delirious state, as well as at preventing complications of the mental disturbance.
Many of us have had the experience of watching one of our children develop fever and confusion. When our older patients become delirious, it should raise the same level of alarm and activity as when it happens in a child.
- Koehler U, Hildebrandt O, Koehler J, Nell C. The pioneer of bedside teaching—Herman Boerhaave (1668–1738). Dtsch Med Wochenschr 2014; 139:2655–2659.
- Sigerist HE. A History of Medicine. New York: Oxford University Press 1951;1. [According to Walker HK. Chapter 1. The origins of the history and physical examination. In: Walker HK, Hall WD, Hurst JW, editors. Clinical Methods: The History, Physical, and Laboratory Examinations. 3rd ed. Boston: Butterworths, 1990.] www.ncbi.nlm.nih.gov/books/NBK458. Accessed August 7, 2017.
- Vandenbroucke JP. In defense of case reports and case series. Ann Intern Med 2001; 134:330–334.
- Shojania KG, Burton EC, McDonald KM, Goldman M. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA 2003; 289:2849–2856.
- Imm M, Torres LF, Kottapally M. Postoperative delirium in a 64-year-old woman. Cleve Clin J Med 2017; 84:690–698.
- Steis MR, Evans L, Hirschman KB, et al. Screening for delirium using family caregivers: convergent validity of the family confusion assessment method and interviewer-rated confusion assessment method. J Am Geriatr Soc 2012; 60:2121–2126.
- Inouye SK, Westendorp RGJ, Saczynski JS. Delirium in elderly people. Lancet 2014; 383:911–922.
Renal denervation: Are we on the right path?
When renal sympathetic denervation, an endovascular procedure designed to treat resistant hypertension, failed to meet its efficacy goal in the SYMPLICITY HTN-3 trial,1 the news was disappointing.
In this issue of the Cleveland Clinic Journal of Medicine, Shishehbor et al2 provide a critical review of the findings of that trial and summarize its intricacies, as well as the results of other important trials of renal denervation therapy for hypertension. To their excellent observations, we would like to add some of our own.
HYPERTENSION: COMMON, OFTEN RESISTANT
The worldwide prevalence of hypertension is increasing. In the year 2000, about 26% of the world’s adult population had hypertension; by 2025, that proportion is projected to rise to 29%, or 1.56 billion people.3
Only about 50% of patients with hypertension are treated for it and, of those, about half have it adequately controlled. In one report, about 30% of US patients with hypertension had adequate blood pressure control.4
Patients who have uncontrolled hypertension are usually older and more obese, have higher baseline blood pressure and excessive salt intake, and are more likely to have chronic kidney disease, diabetes, obstructive sleep apnea, and aldosterone excess.5 Many of these conditions are also associated with increased sympathetic nervous system activity.6
Resistance and pseudoresistance
But lack of control of blood pressure is not the same as resistant hypertension. It is important to differentiate resistant hypertension from pseudoresistant hypertension, ie, hypertension that only seems to be resistant.7 Resistant hypertension affects 12.8% of all drug-treated hypertensive patients in the United States, according to data from the National Health and Nutrition Examination Survey.8
Factors that can cause pseudoresistant hypertension include:
- Suboptimal antihypertensive regimens (truly resistant hypertension means blood pressure that remains high despite concurrent treatment with 3 antihypertensive drugs of different classes, 1 of which is a diuretic, at maximal doses; see the sketch after this list)
- The white coat effect (higher blood pressure in the office than at home, presumably due to the stress of an office visit)
- Suboptimal blood pressure measurement techniques (eg, use of a cuff that is too small, causing falsely high readings)
- Physician inertia (eg, failure to change a regimen that is not working)
- Lifestyle factors (eg, excessive sodium intake)
- Medications that interfere with blood pressure control (eg, nonsteroidal anti-inflammatory drugs)
- Poor adherence to prescribed medications.
Causes of secondary hypertension such as obstructive sleep apnea, primary aldosteronism, and renal artery stenosis should also be ruled out before concluding that a patient has resistant hypertension.
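As a rough illustration only, the working definition and exclusions above can be written as a simple checklist. The sketch below is a minimal encoding under stated assumptions: the field names, the 140/90 mm Hg office goal, and the yes/no adherence and out-of-office confirmation flags are ours, not part of any validated tool, and it does not replace a full evaluation for secondary causes.

```python
# Minimal, illustrative checklist for "apparently true" resistant hypertension.
# The 140/90 mm Hg goal, field names, and boolean flags are assumptions made for
# illustration; this is not a validated clinical tool.
from dataclasses import dataclass

@dataclass
class Regimen:
    drug_class_count: int      # number of distinct antihypertensive classes
    includes_diuretic: bool
    at_maximal_doses: bool

def appears_truly_resistant(office_systolic, office_diastolic, regimen,
                            confirmed_out_of_office, adherent):
    """True only if pressure stays above goal despite an adequate regimen,
    with the white coat effect and nonadherence reasonably excluded."""
    above_goal = office_systolic >= 140 or office_diastolic >= 90
    adequate_regimen = (regimen.drug_class_count >= 3
                        and regimen.includes_diuretic
                        and regimen.at_maximal_doses)
    return above_goal and adequate_regimen and confirmed_out_of_office and adherent

# Example: high office readings on 3 maximally dosed classes including a diuretic,
# confirmed out of the office, with good adherence.
print(appears_truly_resistant(162, 94, Regimen(3, True, True), True, True))  # True
```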
Treatment prevents complications
Hypertension causes a myriad of medical diseases, including accelerated atherosclerosis, myocardial ischemia and infarction, both systolic and diastolic heart failure, rhythm problems (eg, atrial fibrillation), and stroke.
Most patients with resistant hypertension have no identifiable reversible causes of it, exhibit increased sympathetic nervous system activity, and have increased risk of cardiovascular events. The risk can be reduced by treatment.9,10
Adequate and sustained treatment of hypertension prevents and mitigates its complications. The classic Veterans Administration Cooperative Study in the 1960s demonstrated a 96% reduction in cardiovascular events over 18 months with the use of 3 antihypertensive medications in patients with severe hypertension.11 A reduction of as little as 2 mm Hg in the mean blood pressure has been associated with a 10% reduction in the risk of stroke mortality and a 7% decrease in ischemic heart disease mortality.12 This is an important consideration when evaluating the clinical end points of hypertension trials.
SYMPLICITY HTN-3 TRIAL: WHAT DID WE LEARN?
As controlling blood pressure is paramount in reducing cardiovascular complications, it is only natural to look for innovative strategies to supplement the medical treatments of hypertension.
The multicenter SYMPLICITY HTN-3 trial1 was undertaken to establish the efficacy of renal-artery denervation using radiofrequency energy delivered by a catheter-based system (Symplicity RDN, Medtronic, Dublin, Ireland). This randomized, sham-controlled, blinded study did not show a benefit from this procedure with respect to either of its efficacy end points—at 6 months, a reduction in office systolic blood pressure of at least 5 mm Hg more than with medical therapy alone, or a reduction in mean ambulatory systolic pressure of at least 2 mm Hg more than with medical therapy alone.
Despite the negative results, this medium-size (N = 535) randomized clinical trial still represents the highest-level evidence in the field, and we ought to learn something from it.
Limitations of SYMPLICITY HTN-3
Several factors may have contributed to the negative results of the trial.
Patient selection. For the most part, patients enrolled in renal denervation trials, including SYMPLICITY HTN-3, were not selected on the basis of heightened sympathetic nervous system activity. Assessment of sympathetic nervous system activity may identify the population most likely to achieve an adequate response.
Of note, the baseline blood pressure readings of patients in this trial were higher in the office than on ambulatory monitoring. Patients with white coat hypertension have increased sympathetic nervous system activity and thus might actually be good candidates for renal denervation therapy.
Adequacy of ablation was not measured. Many argue that an objective measure of the adequacy of the denervation procedure (qualitative or quantitative) should have been implemented and, if it had been, the results might have been different. For example, when ablation is performed in the main renal artery as well as the branches, the efficacy in reducing levels of norepinephrine is improved.13
Blood pressure fell in both groups. In SYMPLICITY HTN-3 and many other renal denervation trials, patients were assessed using both office and ambulatory blood pressure measurements. The primary end point was the office blood pressure measurement, with a 5-mm Hg difference in reduction chosen to define the superiority margin. This margin was chosen because even small reductions in blood pressure are known to decrease adverse events caused by hypertension. Notably, blood pressure fell significantly in both the control and intervention groups, with an intergroup difference of 2.39 mm Hg (not statistically significant) in favor of denervation.
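To show how far the observed effect fell from the prespecified thresholds, the sketch below compares the additional blood pressure reduction attributed to denervation with the superiority margins quoted earlier. The 5 and 2 mm Hg margins and the 2.39 mm Hg office-pressure difference come from the trial as described above; the actual analysis also required statistical significance, which this point-estimate comparison deliberately leaves out.

```python
# Minimal sketch of the superiority-margin check described above. The margins and
# the 2.39 mm Hg office-pressure difference come from the text; the real trial
# analysis also tested statistical significance, which is omitted here.

SUPERIORITY_MARGINS_MMHG = {
    "office systolic": 5.0,       # primary efficacy end point
    "ambulatory systolic": 2.0,   # secondary efficacy end point
}

def clears_margin(end_point, additional_reduction_mmhg):
    """True if the extra reduction with denervation meets the prespecified margin."""
    return additional_reduction_mmhg >= SUPERIORITY_MARGINS_MMHG[end_point]

# Observed additional office-pressure reduction in SYMPLICITY HTN-3
print(clears_margin("office systolic", 2.39))   # False: 2.39 mm Hg < 5 mm Hg margin
```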
Medication questions. The SYMPLICITY HTN-3 patients were supposed to be on stable medical regimens with maximal tolerated doses before the procedure. However, it was difficult to assess patients’ adherence to and tolerance of medical therapies. Many (about 40%) of the patients had their medications changed during the study.1
Therefore, a critical look at the study enrollment criteria may shed more light on the reasons for the negative findings. Did these patients truly have resistant hypertension? Before they underwent the treatment, was their prestudy pharmacologic regimen adequately intensified?
ONGOING STUDIES
After the findings of the SYMPLICITY HTN-3 study were released, several other trials—such as the Renal Denervation for Hypertension (DENERHTN)14 and Prague-15 trials15—reported conflicting results. Notably, these were not sham-controlled trials.
Newer studies with robust trial designs are ongoing. A quick search of www.clinicaltrials.gov reveals that at least 89 active clinical trials of renal denervation are registered as of the date of this writing. Excluding those with unknown status, there are 63 trials open or ongoing.
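For readers who want to reproduce this kind of count, a minimal sketch follows. It assumes the current ClinicalTrials.gov v2 REST API; the endpoint, parameter names, and status values written here are our assumptions about that interface and may need adjusting against the API documentation, and any counts obtained now will differ from the 2017 figures quoted above.

```python
# Minimal sketch: tally renal denervation trials via what we assume to be the
# ClinicalTrials.gov v2 REST API. The endpoint, parameter names, and status
# values are assumptions and may need adjusting against the current API docs;
# counts will differ from the 2017 figures quoted in the text.
import requests

BASE_URL = "https://clinicaltrials.gov/api/v2/studies"  # assumed endpoint

def count_studies(search_term, statuses=None):
    params = {"query.term": search_term, "countTotal": "true", "pageSize": 1}
    if statuses:
        params["filter.overallStatus"] = ",".join(statuses)
    response = requests.get(BASE_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("totalCount", 0)

registered = count_studies("renal denervation")
open_or_ongoing = count_studies(
    "renal denervation",
    statuses=["RECRUITING", "NOT_YET_RECRUITING",
              "ENROLLING_BY_INVITATION", "ACTIVE_NOT_RECRUITING"],
)
print(f"registered: {registered}, open or ongoing: {open_or_ongoing}")
```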
Clinical trials are also ongoing to determine the effects of renal denervation in patients with heart failure, atrial fibrillation, sleep apnea, and chronic kidney disease, all of which are known to involve heightened sympathetic nervous system activity.
NOT READY FOR CLINICAL USE
Although nonpharmacologic treatments of hypertension continue to be studied and are supported by an avalanche of trials in animals and small, mostly nonrandomized trials in humans, one should not forget that the SYMPLICITY HTN-3 trial simply did not meet its primary efficacy end points. We need definitive clinical evidence showing that renal denervation reduces either blood pressure or clinical events before it becomes a mainstream therapy in humans.
Additional trials are being conducted that were designed in accordance with the recommendations of the European Clinical Consensus Conference for Renal Denervation16 in terms of study population, design, and end points. Well-designed studies that conform to those recommendations are critical.
Finally, although our enthusiasm for renal denervation as a treatment of hypertension is tempered, there have been no noteworthy safety concerns related to the procedure, which certainly helps maintain the research momentum in this field.
- Bhatt DL, Kandzari DE, O’Neill WW, et al; SYMPLICITY HTN-3 Investigators. A controlled trial of renal denervation for resistant hypertension. N Engl J Med 2014; 370:1393–1401.
- Shishehbor MH, Hammad TA, Thomas G. Renal denervation: what happened, and why? Cleve Clin J Med 2017; 84:681–686.
- Kearney PM, Whelton M, Reynolds K, Whelton PK, He J. Global burden of hypertension: analysis of worldwide data. Lancet 2005; 365:217–223.
- Kearney PM, Whelton M, Reynolds K, Whelton PK, He J. Worldwide prevalence of hypertension: a systematic review. J Hypertens 2004; 22:11–19.
- Calhoun DA, Jones D, Textor S, et al; American Heart Association Professional Education Committee. Resistant hypertension: diagnosis, evaluation, and treatment: a scientific statement from the American Heart Association Professional Education Committee of the Council for High Blood Pressure Research. Circulation 2008; 117:e510–e526.
- Tsioufis C, Papademetriou V, Thomopoulos C, Stefanadis C. Renal denervation for sleep apnea and resistant hypertension: alternative or complementary to effective continuous positive airway pressure treatment? Hypertension 2011; 58:e191–e192.
- Calhoun DA, Jones D, Textor S, et al. Resistant hypertension: diagnosis, evaluation, and treatment. A scientific statement from the American Heart Association Professional Education Committee of the Council for High Blood Pressure Research. Hypertension 2008; 51:1403–1419.
- Persell SD. Prevalence of resistant hypertension in the United States, 2003–2008. Hypertension 2011; 57:1076–1080.
- Papademetriou V, Doumas M, Tsioufis K. Renal sympathetic denervation for the treatment of difficult-to-control or resistant hypertension. Int J Hypertens 2011; 2011:196518.
- Doumas M, Faselis C, Papademetriou V. Renal sympathetic denervation in hypertension. Curr Opin Nephrol Hypertens 2011; 20:647–653.
- Veterans Administration Cooperative Study Group on Antihypertensive Agents. Effect of treatment on morbidity in hypertension: results in patients with diastolic blood pressures averaging 115 through 129 mm Hg. JAMA 1967; 202:1028–1034.
- Lewington S, Clarke R, Qizilbash N, Peto R, Collins R; Prospective Studies Collaboration. Age-specific relevance of usual blood pressure to vascular mortality: a meta-analysis of individual data for one million adults in 61 prospective studies. Lancet 2002; 360:1903–1913.
- Henegar JR, Zhang Y, Hata C, Narciso I, Hall ME, Hall JE. Catheter-based radiofrequency renal denervation: location effects on renal norepinephrine. Am J Hypertens 2015; 28:909–914.
- Azizi M, Sapoval M, Gosse P, et al; Renal Denervation for Hypertension (DENERHTN) investigators. Optimum and stepped care standardised antihypertensive treatment with or without renal denervation for resistant hypertension (DENERHTN): a multicentre, open-label, randomised controlled trial. Lancet 2015; 385:1957–1965.
- Rosa J, Widimsky P, Waldauf P, et al. Role of adding spironolactone and renal denervation in true resistant hypertension: one-year outcomes of randomized PRAGUE-15 study. Hypertension 2016; 67:397–403.
- Mahfoud F, Bohm M, Azizi M, et al. Proceedings from the European Clinical Consensus Conference for Renal Denervation: considerations on future clinical trial design. Eur Heart J 2015; 36:2219–2227.
Reducing Routine Labs—Teaching Residents Restraint
Inappropriate resource utilization is a pervasive problem in healthcare, and it has received increasing emphasis over the last few years as financial strain on the healthcare system has grown. This waste has led to new models of care—bundled care payments, accountable care organizations, and merit-based payment systems. Professional organizations have also emphasized the provision of high-value care and avoiding unnecessary diagnostic testing and treatment. In April 2012, the American Board of Internal Medicine (ABIM) launched the Choosing Wisely initiative to assist professional societies in putting forth recommendations on clinical circumstances in which particular tests and procedures should be avoided.
Until recently, teaching cost-effective care was not widely considered an important part of internal medicine residency programs. In a 2010 survey of internal medicine residents about resource utilization feedback, only 37% reported receiving any feedback on resource utilization and only 20% reported receiving regular feedback.1 These findings are especially significant in the broader context of national healthcare spending, as there is evidence that physicians who train in high-spending localities tend to have high-spending patterns later in their careers.2 Another study found a similar association between region of training and internists’ ability to recognize high-value care on ABIM examination questions.3 The Accreditation Council for Graduate Medical Education has developed the Clinical Learning Environment Review program to help address this need; the program gives teaching hospitals feedback on how well they are teaching residents and fellows to provide high-value medical care.
Given the current zeitgeist of emphasizing cost-effective, high-value care, appropriate utilization of routine labs stands out as especially low-hanging fruit. The Society of Hospital Medicine, as part of the Choosing Wisely campaign, recommended minimizing routine lab draws in hospitalized patients with clinical and laboratory stability.4 Avoiding unnecessary routine lab draws is clearly desirable: it spares patients the pain of superfluous phlebotomy, frees phlebotomy resources for blood draws with actual clinical utility, and saves money. There is also good evidence that hospital-acquired anemia, a consequence of overuse of routine blood draws, adversely affects morbidity and mortality in patients after myocardial infarction5,6 and, more generally, in hospitalized patients.7
Several studies have examined lab utilization on teaching services. Not surprisingly, most test utilization (about 71%) is attributable to interns (45%) and residents (26%) rather than attendings.8 In another single-center study, internal medicine residents reported a much stronger predilection than hospitalist attendings for ordering daily recurring routine labs, rather than one-time labs for the following morning, both when admitting patients and when picking up patients.9 This self-reported tendency translated into more complete blood counts and basic chemistry panels ordered per patient per day. A qualitative study of why internal medicine and general surgery residents ordered unnecessary labs yielded a range of explanations, including ingrained habit, lack of price transparency, clinical uncertainty, the belief that the attending expected it, and the absence of a culture emphasizing resource utilization.10
In this issue of the Journal of Hospital Medicine, Kurtzman and colleagues report a mixed-methods study of internal medicine residents’ engagement with an electronic medical record–associated dashboard providing feedback on lab utilization at their center.11 Over a 6-month period, residents randomized to the dashboard group received weekly e-mails while on service containing a brief synopsis of their lab utilization relative to their peers and a link to a dashboard with a time-series display of their relative lab ordering. Although most residents (74%) opened the e-mail, only a minority (21%) actually accessed the dashboard. There was no statistically significant relationship between dashboard use and lab ordering, though there was a trend toward decreased ordering among residents who opened the dashboard. Residents who participated in a focus group expressed both positive and negative opinions about the dashboard.
The dashboard is an example of social comparison feedback, which aims to improve performance by showing physicians how their practice compares with that of their peers. This approach has been effective in other areas of clinical medicine, such as limiting antibiotic overuse in patients with upper respiratory infections.12 One study comparing social comparison feedback with objective feedback on a simulated work task found that, relative to standard objective feedback, social comparison feedback improved performance more among high performers but less among low performers.13 The utility of this type of feedback has not been extensively studied in healthcare.
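The mechanics behind such peer-comparison feedback are simple to illustrate. The sketch below is a minimal, hypothetical Python example, not the dashboard the authors built: it computes each resident's routine labs ordered per patient-day from a toy order log and reports where one resident falls in the peer distribution, roughly the kind of summary the weekly e-mails conveyed. All names and numbers are invented.

```python
from collections import defaultdict

# Hypothetical order log: (ordering_resident, patient_id, service_date, test_name).
# In a real system these rows would come from the EHR's order tables.
ORDERS = [
    ("res_a", "pt1", "2017-06-05", "CBC"),
    ("res_a", "pt1", "2017-06-05", "BMP"),
    ("res_a", "pt2", "2017-06-05", "CBC"),
    ("res_b", "pt3", "2017-06-05", "CBC"),
    ("res_b", "pt3", "2017-06-06", "CBC"),
    ("res_c", "pt4", "2017-06-06", "BMP"),
]

def labs_per_patient_day(orders):
    """Return {resident: routine labs ordered per patient-day covered}."""
    lab_counts = defaultdict(int)    # resident -> total routine labs ordered
    patient_days = defaultdict(set)  # resident -> {(patient, date)} pairs covered
    for resident, patient, date, _test in orders:
        lab_counts[resident] += 1
        patient_days[resident].add((patient, date))
    return {r: lab_counts[r] / len(patient_days[r]) for r in lab_counts}

def peer_comparison(rates, resident):
    """Summarize one resident's rate against the peer distribution."""
    peers = sorted(rates.values())
    own = rates[resident]
    percentile = 100 * sum(r <= own for r in peers) / len(peers)
    return own, percentile

if __name__ == "__main__":
    rates = labs_per_patient_day(ORDERS)
    own, percentile = peer_comparison(rates, "res_a")
    print(f"res_a: {own:.1f} routine labs per patient-day "
          f"({percentile:.0f}th percentile among peers)")
```

A production version would obviously need risk adjustment and attribution rules for shared patients; the point here is only that the comparison itself is computationally trivial. The harder question is whether seeing it changes ordering behavior.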
However, the audit and feedback strategy, of which social comparison feedback is a subtype, has been extensively studied in healthcare. A 2012 Cochrane Review found that audit and feedback leads to “small but potentially important improvements in professional practice.”14 They found a wide variation in the effect of feedback among the 140 studies they analyzed. The factors strongly associated with a significant improvement after feedback were as follows: poor performance at baseline, a colleague or supervisor as the one providing the audit and feedback, repetitive feedback, feedback given both verbally and in writing, and clear advice or guidance on how to improve. Many of these components were missing from this study—that may be one reason the authors did not find a significant relationship between dashboard use and lab ordering.
A number of interventions, however, have been shown to decrease lab utilization, including unbundling the components of the metabolic panel and disallowing daily recurring lab orders,15 fee displays,16 cost reminders,17 didactics with data feedback,18 and a multifaceted approach combining didactics, monthly feedback, a checklist, and financial incentives.19 A multipronged strategy that combines education, audit and feedback, hard-stop limits on redundant lab ordering (a simple check of this kind is sketched below), and fee information is likely to be the most successful approach to reducing lab overutilization for both residents and attending physicians. Resource overutilization is a multifactorial problem, and multifactorial problems call for multifaceted solutions. Moreover, it may be necessary to employ both “carrot” and “stick” elements, rewarding physicians who practice appropriate stewardship and penalizing those who do not adjust their lab ordering tendencies after receiving feedback showing overuse.
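To make the idea of a hard stop concrete, here is a minimal, hypothetical sketch of a redundant-order check; it is not drawn from any of the cited interventions. A repeat routine test ordered within a locally defined minimum interval is blocked unless the orderer supplies a clinical justification. The test names, intervals, and override mechanism are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Hypothetical minimum intervals before a routine test may be repeated
# without an explicit override; a real hospital would tune these locally.
MIN_REPEAT_INTERVAL = {
    "CBC": timedelta(hours=24),
    "BMP": timedelta(hours=24),
}

def check_order(test, patient_history, now, override_reason=None):
    """Return (allowed, message) for a proposed routine lab order.

    patient_history maps test name -> datetime of the most recent result.
    """
    last = patient_history.get(test)
    interval = MIN_REPEAT_INTERVAL.get(test)
    if last is None or interval is None or now - last >= interval:
        return True, "order accepted"
    if override_reason:
        return True, f"order accepted with override: {override_reason}"
    remaining = interval - (now - last)
    return False, (f"{test} last resulted {now - last} ago; "
                   f"repeat blocked for another {remaining} unless overridden")

if __name__ == "__main__":
    history = {"CBC": datetime(2017, 6, 5, 6, 0)}

    allowed, message = check_order("CBC", history, datetime(2017, 6, 5, 16, 0))
    print(allowed, message)   # False: only 10 hours since the last CBC

    allowed, message = check_order("CBC", history, datetime(2017, 6, 5, 16, 0),
                                   override_reason="active GI bleed")
    print(allowed, message)   # True: clinical justification supplied
```

In practice such a rule would sit inside the electronic health record's order-entry workflow, and the override reasons themselves would become useful audit-and-feedback data.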
Physician behavior is difficult to change, and there are many reasons why physicians order inappropriate tests and studies, including provider uncertainty, fear of malpractice litigation, and inadequate time to consider the utility of a test. Audit and feedback should be integrated into residency curriculums focusing on high-value care, in which hospitalists should play a central role. If supervising attendings are not integrated into such curriculums and continue to both overorder tests themselves and allow residents to do so, then the informal curriculum will trump the formal one.
Physicians respond to incentives, and appropriately designed incentives should be developed to help steer them to order only those tests and studies that are medically indicated. Such incentives must be provided alongside audit and feedback with appropriate goals that account for patient complexity. Ultimately, routine lab ordering is just one area of overutilization in hospital medicine, and the techniques that are successful at reducing overuse in this arena will need to be applied to other aspects of medicine like imaging and medication prescribing.
Disclosure
The authors declare no conflicts of interest.
1. Dine CJ, Miller J, Fuld A, Bellini LM, Iwashyna TJ. Educating Physicians-in-Training About Resource Utilization and Their Own Outcomes of Care in the Inpatient Setting. J Grad Med Educ. 2010;2(2):175-180.
2. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385-2393.
3. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640-1648.
4. Bulger J, Nickel W, Messler J, et al. Choosing wisely in adult hospital medicine: Five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486-492.
5. Salisbury AC, Amin AP, Reid KJ, et al. Hospital-acquired anemia and in-hospital mortality in patients with acute myocardial infarction. Am Heart J. 2011;162(2):300-309.e3.
6. Meroño O, Cladellas M, Recasens L, et al. In-hospital acquired anemia in acute coronary syndrome. Predictors, in-hospital prognosis and one-year mortality. Rev Esp Cardiol (Engl Ed). 2012;65(8):742-748.
7. Koch CG, Li L, Sun Z, et al. Hospital-acquired anemia: Prevalence, outcomes, and healthcare implications. J Hosp Med. 2013;8(9):506-512.
8. Iwashyna TJ, Fuld A, Asch DA, Bellini LM. The impact of residents, interns, and attendings on inpatient laboratory ordering patterns: a report from one university’s hospitalist service. Acad Med. 2011;86(1):139-145.
9. Ellenbogen MI, Ma M, Christensen NP, Lee J, O’Leary KJ. Differences in Routine Laboratory Ordering Between a Teaching Service and a Hospitalist Service at a Single Academic Medical Center. South Med J. 2017;110(1):25-30.
10. Sedrak MS, Patel MS, Ziemba JB, et al. Residents’ self-report on why they order perceived unnecessary inpatient laboratory tests. J Hosp Med. 2016;11(12):869-872.
11. Kurtzman G, Dine J, Epstein A, et al. Internal Medicine Resident Engagement with a Laboratory Utilization Dashboard: Mixed Methods Study. J Hosp Med. 2017;12(9):743-746.
12. Meeker D, Linder JA, Fox CR, et al. Effect of Behavioral Interventions on Inappropriate Antibiotic Prescribing Among Primary Care Practices: A Randomized Clinical Trial. JAMA. 2016;315(6):562-570.
13. Moon K, Lee K, Lee K, Oah S. The Effects of Social Comparison and Objective Feedback on Work Performance Across Different Performance Levels. J Organ Behav Manage. 2017;37(1):63-74.
14. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes (review). Cochrane Database Syst Rev. 2012;(6):CD000259.
15. Neilson EG, Johnson KB, Rosenbloom ST, Dupont WD, Talbert D, Giuse DA. The Impact of Peer Management on Test-Ordering Behavior. Ann Intern Med. 2004;141:196-204.
16. Feldman LS, Shihab HM, Thiemann D, et al. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med. 2013;173(10):903-908.
17. Stuebing EA, Miner TJ. Surgical vampires and rising health care expenditure: reducing the cost of daily phlebotomy. Arch Surg. 2011;146:524-527.
18. Iams W, Heck J, Kapp M, et al. A Multidisciplinary Housestaff-Led Initiative to Safely Reduce Daily Laboratory Testing. Acad Med. 2016;91(6):813-820.
19. Yarbrough PM, Kukhareva PV, Horton D, Edholm K, Kawamoto K. Multifaceted intervention including education, rounding checklist implementation, cost feedback, and financial incentives reduces inpatient laboratory costs. J Hosp Med. 2016;11(5):348-354.
Does the Week-End Justify the Means?
Let’s face it—rates of hospital admission are on the rise, but there are still just 7 days in a week. That means that patients are increasingly admitted on weekdays and on the weekend, requiring more nurses and doctors to look after them. Why then are there no lines for coffee on a Saturday? Does this reduced intensity of staffing translate into worse care for our patients?
Since one of its earliest descriptions in hospitalized patients, the “weekend effect” has been extensively studied in various patient populations and hospital settings.1-5 The results have been varied, depending on the place of care,6 reason for care, type of admission,5,7 or admitting diagnosis.1,8,9 Many researchers have posited drivers of the weekend effect, including understaffed wards, reduced intensity of specialist care, delays in procedural treatments, and greater severity of illness, but the truth is that we still don’t know.
Pauls et al. performed a robust systematic review and meta-analysis examining the rates of in-hospital mortality in patients admitted on the weekend compared with those admitted on weekdays.10 They analyzed predetermined subgroups to identify system- and patient-level factors associated with a difference in weekend mortality.
A total of 97 studies—comprising an astounding 51 million patients—were included in the meta-analysis. They found that individuals admitted on the weekend carried an almost 20% increase in the risk of death compared with those who landed in hospital on a weekday. The effect was present both for in-hospital deaths and when looking specifically at 30-day mortality. In absolute terms, these findings translate to approximately 14 additional deaths per 1000 admissions when patients are admitted on the weekend. Brain surgery can be less risky.11
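As a rough check of how the relative and absolute figures relate (assuming, purely for illustration, a baseline weekday mortality of about 7%, a figure not reported here), a 20% relative increase reproduces the stated excess:

$$\text{absolute excess} \approx p_{\text{weekday}} \times (\mathrm{RR} - 1) \approx 0.07 \times 0.20 = 0.014 \approx 14 \text{ deaths per } 1000 \text{ admissions.}$$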
Despite this concerning finding, no individual factor was identified that could account for the effect. Decreased hospital staffing and delays to procedural therapies were associated with 16% and 11% increases in mortality among weekend patients, respectively. No differences were found when examining reduced rates of procedures or illness severity on weekends compared with weekdays. But one must always interpret subgroup analyses, even prespecified ones, with caution because they often lack the statistical power to support firm conclusions.
To this end, an important finding of the study by Pauls et al. highlights the variation in mortality risk as it relates to the weekend effect.10 Even for individuals with cancer, a disease with a relatively predictable rate of decline, weekend differences in mortality risk depend on the type of cancer.8,12 This heterogeneity persists when examining the possible factors that contribute to the effect, introducing considerable noise into the analysis, and may explain why research to date has been unable to find the proverbial black cat in the coal cellar.
One thing Pauls et al. make clear is that the weekend effect appears to be a real phenomenon, despite significant heterogeneity in the literature.10 Only a high-quality systematic review can support such a conclusion. Prior work demonstrates that this effect is substantial in some groups of patients, and this study confirms that it persists beyond the immediate period following admission.1,9 The elements contributing to the weekend effect remain undefined and are likely as complex as our healthcare system itself.
Society and policy makers should resist the tantalizing urge to implement interventions aimed at fixing this issue before fully understanding the drivers of a system problem. The government of the United Kingdom has pledged in its manifesto to create a “7-day National Health Service,” in which weekend services and physician staffing would match those of weekdays. Considering recent labor tensions between junior doctors and the government in the United Kingdom over pay and working hours, the stakes are at an all-time high.
But such drastic measures violate a primary directive of quality improvement science: study and understand the problem before reflexively jumping to solutions. This will require new research aimed at determining the underlying factor(s) responsible for the weekend effect. Only once we are confident in its cause can targeted interventions aimed at the highest-risk admissions be carefully evaluated. As global hospital and healthcare budgets bend under increasing strain, a critical component of any proposed intervention must be an examination of its cost-effectiveness. Because the weekend effect is one of increased mortality, it will be hard to settle on an acceptable price for an individual’s life. And it is not as straightforward as a randomized trial examining the efficacy of parachutes. Any formal evaluation must account for the unintended consequences and opportunity costs of implementing a potential fix aimed at minimizing the weekend effect.
The weekend effect has now been studied for over 15 years. Pauls et al. add to our knowledge of this phenomenon, confirming that the overall risk of mortality for patients admitted on the weekend is real, variable, and substantial.10 As more individuals are admitted to hospitals, resulting in increasing numbers of admissions on the weekend, a desperate search for the underlying cause must be carried out before we can fix it. Whatever the means to the end, our elation will continue to be tempered by a feeling of uneasiness every time our coworkers joyously exclaim, “TGIF!”
Disclosure
The authors have nothing to disclose.
1. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. doi:10.1056/NEJMsa003376. PubMed
2. Bell CM, Redelmeier DA. Waiting for urgent procedures on the weekend among emergently hospitalized patients. Am J Med. 2004;117(3):175-181. doi:10.1016/j.amjmed.2004.02.047. PubMed
3. Kalaitzakis E, Helgeson J, Strömdahl M, Tóth E. Weekend admission in upper GI bleeding: does it have an impact on outcome? Gastrointest Endosc. 2015;81(5):1295-1296. doi:10.1016/j.gie.2014.12.003. PubMed
4. Nanchal R, Kumar G, Taneja A, et al. Pulmonary embolism: the weekend effect. Chest. 2012;142(3):690-696. doi:10.1378/chest.11-2663. PubMed
5. Ricciardi R, Roberts PL, Read TE, Baxter NN, Marcello PW, Schoetz DJ. Mortality rate after nonelective hospital admission. Arch Surg. 2011;146(5):545-551. PubMed
6. Wunsch H, Mapstone J, Brady T, Hanks R, Rowan K. Hospital mortality associated with day and time of admission to intensive care units. Intensive Care Med. 2004;30(5):895-901. doi:10.1007/s00134-004-2170-3. PubMed
7. Freemantle N, Richardson M, Wood J, et al. Weekend hospitalization and additional risk of death: an analysis of inpatient data. J R Soc Med. 2012;105(2):74-84. doi:10.1258/jrsm.2012.120009. PubMed
8. Lapointe-Shaw L, Bell CM. It’s not you, it’s me: time to narrow the gap in weekend care. BMJ Qual Saf. 2014;23(3):180-182. doi:10.1136/bmjqs-2013-002674. PubMed
9. Concha OP, Gallego B, Hillman K, Delaney GP, Coiera E. Do variations in hospital mortality patterns after weekend admission reflect reduced quality of care or different patient cohorts? A population-based study. BMJ Qual Saf. 2014;23(3):215-222. doi:10.1136/bmjqs-2013-002218. PubMed
10. Pauls LA, Johnson-Paben R, McGready J, Murphy JD, Pronovost PJ, Wu CL. The Weekend Effect in Hospitalized Patients: A Meta-analysis. J Hosp Med. 2017;12(9):760-766. PubMed
11. American College of Surgeons. NSQIP Risk Calculator. http://riskcalculator.facs.org/RiskCalculator/. Accessed on July 5, 2017.
12. Lapointe-Shaw L, Abushomar H, Chen XK, et al. Care and outcomes of patients with cancer admitted to the hospital on weekends and holidays: a retrospective cohort study. J Natl Compr Canc Netw. 2016;14(7):867-874. PubMed
Inpatient Thrombophilia Testing: At What Expense?
Thrombotic disorders, such as venous thromboembolism (VTE) and acute ischemic stroke, are highly prevalent,1 morbid, and anxiety-provoking conditions for patients, their families, and providers.2 Often, a clear cause for these thrombotic events cannot be found, leading to diagnoses of “cryptogenic stroke” or “idiopathic VTE.” In response, many patients and clinicians search for a cause with thrombophilia testing.
However, evaluation for thrombophilia is rarely clinically useful in hospitalized patients. Test results are often inaccurate in the setting of acute thrombosis or active anticoagulation. Even when thrombophilia results are reliable, they seldom alter immediate management of the underlying condition, especially for the inherited forms.3 An important exception is when there is high clinical suspicion for the antiphospholipid syndrome (APS), because APS test results may affect both short-term and long-term drug choices and international normalized ratio target range. Despite the broad recommendations against routine use of thrombophilia testing (including the Choosing Wisely campaign),4 patterns and cost of testing for inpatient thrombophilia evaluation have not been well reported.
In this issue of the Journal of Hospital Medicine, Cox et al.5 and Mou et al.6 retrospectively review the appropriateness and impact of inpatient thrombophilia testing at 2 academic centers. In the report by Mou and colleagues, nearly half of all thrombophilia tests were deemed inappropriate, at an excess cost of over $40,000. Cox and colleagues found that 77% of patients received 1 or more thrombophilia tests with minimal clinical utility. Perhaps most striking, Cox and colleagues report that management was affected in only 2 of 163 patients (1.2%) who received thrombophilia testing; both had cryptogenic stroke, and both were started on anticoagulation after testing positive for multiple coagulation defects.
These studies confirm 2 key findings: first, that 43%-63% of tests are potentially inaccurate or of low utility, and second, that inpatient thrombophilia testing can be costly. Importantly, the costs of inappropriate testing were likely underestimated. For example, Mou et al. excluded 16.6% of tests that were performed for reasons that could not always be easily justified—such as “tests ordered with no documentation or justification” or “work-up sent solely on suspicion of possible thrombotic event without diagnostic confirmation.” Additionally, Mou et al. defined appropriateness more generously than current guidelines; for example, “recurrent provoked VTE” was listed as an appropriate indication for thrombophilia testing, although this is not supported by current guidelines for inherited thrombophilia evaluation. Similarly, Cox et al. included cryptogenic stroke as an appropriate indication for thrombophilia testing; however, current American Heart Association and American Stroke Association guidelines state that the usefulness of screening for hypercoagulable states in such patients is unknown.7 Furthermore, APS testing is not recommended in all cases of cryptogenic stroke in the absence of other clinical manifestations of APS.7
It remains puzzling why physicians continue to order inpatient thrombophilia testing despite its low clinical utility and potentially inaccurate results. Cox and colleagues suggest that a lack of clinician and patient education may be partly responsible. Likewise, easy access to “thrombophilia panels” makes it simple for any clinician to order a battery of tests that appear expert endorsed by virtue of their inclusion in the panel. Cox et al. found that 79% of all thrombophilia tests were ordered as part of a panel. Finally, patients and clinicians are continually searching for a reason why the thromboembolic event occurred. Thrombophilia test results, even if potentially inaccurate, may provide a sense of relief for both parties, no matter the findings. If a thrombophilia is found, patients and clinicians often feel they know why the thrombotic event occurred. If the testing is negative, there may be a false sense of reassurance that no genetic cause for thrombosis exists.8
How can we improve care in this regard? Given the magnitude of the financial and psychological costs of inappropriate inpatient thrombophilia testing,9 a robust deimplementation effort is needed.10,11 Electronic-medical-record–based solutions may be the most effective tools for educating physicians at the point of care while simultaneously deterring inappropriate ordering. Examples include eliminating tests without evidence of clinical utility in the inpatient setting (eg, methylenetetrahydrofolate reductase); using hard stops to prevent unintentional duplicate tests12; and preventing providers from ordering tests that are not reliable in certain settings—such as protein S activity when patients are receiving warfarin. The latter intervention alone would have prevented 16% of the tests (in 44% of the patients) performed in the Cox et al. study. Other promising approaches include embedding guidelines into order sets and requiring the provider to choose a guideline-based reason before being allowed to order such a test. Finally, eliminating thrombophilia “panels” may reduce unnecessary duplicate testing and avoid giving a false sense of clinical validation to ordering providers who may not be familiar with the indications or nuances of each individual test.
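To make the kind of order-entry logic described above concrete, the following is a minimal, purely hypothetical sketch; the class and function names, the test lists, and the indication list are invented for illustration and do not correspond to any real electronic medical record vendor's API or to the specific rules used at the study sites.

```python
# Hypothetical sketch of an order-entry "hard stop" for inpatient thrombophilia tests.
# All names (PatientContext, OrderDecision, evaluate_thrombophilia_order) are invented
# for this illustration and do not reflect any real EMR vendor API.

from dataclasses import dataclass
from typing import List, Optional

# Tests that are unreliable while a patient is receiving warfarin (illustrative subset).
UNRELIABLE_ON_WARFARIN = {"protein C activity", "protein S activity"}

# Tests with no evidence of inpatient clinical utility that could be removed from order sets.
NO_INPATIENT_UTILITY = {"MTHFR mutation"}

# Illustrative guideline-based indications a provider must select before ordering.
GUIDELINE_INDICATIONS = {
    "suspected antiphospholipid syndrome",
    "unprovoked VTE with plan to discontinue anticoagulation",
}


@dataclass
class PatientContext:
    active_medications: List[str]


@dataclass
class OrderDecision:
    allowed: bool
    message: Optional[str] = None


def evaluate_thrombophilia_order(test_name: str,
                                 indication: Optional[str],
                                 patient: PatientContext) -> OrderDecision:
    """Decide whether a thrombophilia test order should proceed at order entry."""
    if test_name in NO_INPATIENT_UTILITY:
        return OrderDecision(False, f"{test_name} has no demonstrated inpatient clinical utility.")
    if test_name in UNRELIABLE_ON_WARFARIN and "warfarin" in patient.active_medications:
        return OrderDecision(False, f"{test_name} is unreliable during warfarin therapy; "
                                    "consider deferring testing to the outpatient setting.")
    if indication not in GUIDELINE_INDICATIONS:
        return OrderDecision(False, "A guideline-based indication is required to order this test.")
    return OrderDecision(True)


# Example: this order is blocked because the patient is receiving warfarin.
decision = evaluate_thrombophilia_order(
    "protein S activity",
    "suspected antiphospholipid syndrome",
    PatientContext(active_medications=["warfarin", "metoprolol"]),
)
print(decision.allowed, decision.message)
```

In a real deployment, rules of this kind would live in the clinical decision support layer of the order-entry workflow and would require local clinical governance and ongoing maintenance.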
In light of mounting evidence, including the 2 important studies discussed above, it is no longer appropriate or wise to allow unfettered access to thrombophilia testing in hospitalized patients. The evidence suggests that these tests are often ordered without regard to expense, utility, or accuracy in hospital-based settings. Deimplementation efforts that combine education with hard stops and restricted access to thrombophilia testing in electronic ordering systems now appear necessary.
Disclosure
Lauren Heidemann and Christopher Petrilli have no conflicts of interest to report. Geoffrey Barnes reports the following conflicts of interest: Research funding from NIH/NHLBI (K01 HL135392), Blue Cross-Blue Shield of Michigan, and BMS/Pfizer. Consulting from BMS/Pfizer and Portola.
1. Heit JA. Thrombophilia: common questions on laboratory assessment and management. Hematology Am Soc Hematol Educ Program. 2007:127-135. PubMed
2. Mozaffarian D, Benjamin EJ, Go AS, et al. Heart disease and stroke statistics--2015 update: a report from the American Heart Association. Circulation. 2015;131(4):e29-322. PubMed
3. Petrilli CM, Heidemann L, Mack M, Durance P, Chopra V. Inpatient inherited thrombophilia testing. J Hosp Med. 2016;11(11):801-804. PubMed
4. American Society of Hematology. Ten Things Physicians and Patients Should Question. Choosing Wisely 2014. http://www.choosingwisely.org/societies/american-society-of-hematology/. Accessed July 3, 2017.
5. Cox N, Johnson SA, Vazquez S, et al. Patterns and appropriateness of thrombophilia testing in an academic medical center. J Hosp Med. 2017;12(9):705-709. PubMed
6. Mou E, Kwang H, Hom J, et al. Magnitude of potentially inappropriate thrombophilia testing in the inpatient hospital setting. J Hosp Med. 2017;12(9):735-738. PubMed
7. Kernan WN, Ovbiagele B, Black HR, et al. Guidelines for the prevention of stroke in patients with stroke and transient ischemic attack: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2014;45(7):2160-2236. PubMed
8. Stevens SM, Woller SC, Bauer KA, et al. Guidance for the evaluation and treatment of hereditary and acquired thrombophilia. J Thromb Thrombolysis. 2016;41(1):154-164. PubMed
9. Bank I, Scavenius MP, Buller HR, Middeldorp S. Social aspects of genetic testing for factor V Leiden mutation in healthy individuals and their importance for daily practice. Thromb Res. 2004;113(1):7-12. PubMed
10. Niven DJ, Mrklas KJ, Holodinsky JK, et al. Towards understanding the de-adoption of low-value clinical practices: a scoping review. BMC Med. 2015;13:255. PubMed
11. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1. PubMed
12. Procop GW, Keating C, Stagno P, et al. Reducing duplicate testing: a comparison of two clinical decision support tools. Am J Clin Pathol. 2015;143(5):623-626. PubMed
Certification of Point-of-Care Ultrasound Competency
Any conversation about point-of-care ultrasound (POCUS) inevitably brings up discussion about credentialing, privileging, and certification. While credentialing and privileging are institution-specific processes, competency certification can be extramural through a national board or intramural through an institutional process.
Some institutions have begun to develop intramural certification pathways for POCUS competency in order to grant privileges to hospitalists. In this edition of the Journal of Hospital Medicine, Mathews and Zwank2 describe a multidisciplinary collaboration to provide POCUS training, intramural certification, and quality assurance for hospitalists at one hospital in Minnesota. This model serves as a real-world example of how institutions are addressing the need to certify hospitalists in basic POCUS competency. After engaging stakeholders from radiology, critical care, emergency medicine, and cardiology, institutional standards were developed and hospitalists were assessed for basic POCUS competency. Certification included assessments of hospitalists’ knowledge, image acquisition, and image interpretation skills. The model described by Mathews and Zwank did not assess competency in clinical integration but laid the groundwork for future evaluation of clinical outcomes in the cohort of certified hospitalists.
Although experts may not agree on all aspects of competency in POCUS, most will agree with the basic principles outlined by Mathews and Zwank. Initial certification should be based on training and an initial assessment of competency. Components of training should include ultrasound didactics, mentored hands-on practice, independent hands-on practice, and image interpretation practice. Ongoing certification should be based on quality assurance incorporated with an ongoing assessment of skills. Additionally, most experts will agree that competency can be recognized, and that formative and summative assessments combining a global (gestalt) impression of provider skill with quantitative, checklist-based scoring are likely the best approach.
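As one illustration of how such an assessment might be operationalized (the checklist items, the passing threshold, and the function name below are hypothetical and are not taken from Mathews and Zwank or any certification standard), a summative decision could combine checklist completion with the assessor's overall judgment:

```python
# Hypothetical sketch of a POCUS competency assessment that combines a quantitative
# checklist with an assessor's global (gestalt) rating. Items and threshold are
# invented for illustration only.

from typing import Dict

CHECKLIST_ITEMS = [
    "selects appropriate transducer and preset",
    "acquires the required standard views",
    "optimizes depth and gain",
    "correctly interprets the findings",
    "describes the limitations of the examination",
]


def summative_assessment(item_scores: Dict[str, bool],
                         gestalt_pass: bool,
                         required_fraction: float = 0.8) -> bool:
    """Pass only if enough checklist items are met AND the assessor's gestalt is a pass."""
    completed = sum(1 for item in CHECKLIST_ITEMS if item_scores.get(item, False))
    checklist_pass = completed / len(CHECKLIST_ITEMS) >= required_fraction
    return checklist_pass and gestalt_pass


# Formative use: the same checklist identifies specific items needing remediation.
scores = {item: True for item in CHECKLIST_ITEMS}
scores["describes the limitations of the examination"] = False
print(summative_assessment(scores, gestalt_pass=True))  # True: 4/5 meets the 0.8 threshold
```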
The real question is, what is the goal of certifying POCUS competency? Development of an institutional certification process demands substantial resources from the institution and time from providers. Institutions would have to invest in equipment and staff to operate a full-time certification program, given the large number of providers who use POCUS, and would have to justify why substantial resources are dedicated to certifying POCUS skills and not others. Providers may be dissuaded from using POCUS if certification requirements are burdensome, which has potential negative consequences, such as reverting to performing bedside procedures without ultrasound guidance or referring all patients to interventional radiology.
One might assume that certification is required for providers to bill for POCUS examinations, but it is not, although institutions may require certification before granting privileges to use POCUS. However, based on the experience of emergency medicine, a specialty that has been using POCUS for more than 20 years, billing may not be the main driver of POCUS use. A recent review of 2012 Medicare data revealed that <1% of emergency medicine providers received reimbursement for limited ultrasound exams.3 Despite the Accreditation Council for Graduate Medical Education (ACGME) requirement for POCUS competency of all graduating emergency medicine residents since 2001 and the increasing POCUS use reported by emergency medicine physicians,4,5 most emergency medicine physicians are not billing for POCUS exams. Perhaps use of POCUS as a “quick look” or extension of the physical examination is more common than previously thought. Although billing for POCUS exams can generate some clinical revenue, the benefits to the healthcare system of expediting care,6,7 reducing ancillary testing,8,9 and reducing procedural complications10,11 likely outweigh the small gains from billing for limited ultrasound exams. As healthcare payment models evolve to reward healthcare systems for achieving good outcomes rather than for services rendered, certification for the sole purpose of billing may become obsolete. Furthermore, concerns that billing for POCUS increases medical liability are likely overstated because few lawsuits have resulted from missed diagnoses by POCUS, and most have stemmed from failure to perform a POCUS exam in a timely manner.12,13
Many medical students graduating today have had some training in POCUS14 and, as this new generation of physicians enters the workforce, they will likely view POCUS as part of their routine bedside evaluation of patients. If POCUS training is integrated into medical school and residency curricula, and national board certification incorporates basic POCUS competency, then most institutions may no longer feel obligated to certify POCUS competency locally, and institutional certification programs, such as the one described by Mathews and Zwank, would become obsolete.
For now, until all providers enter the workforce with basic competency in POCUS and medical culture accepts that ultrasound is a diagnostic tool available to any trained provider, hospitalists may need to provide proof of their competence through intramural or extramural certification. The work of Mathews and Zwank provides an example of how local certification processes can be established. In a future edition of the Journal of Hospital Medicine, the Society of Hospital Medicine Point-of-Care Ultrasound Task Force will present a position statement with recommendations for certification of competency in bedside ultrasound-guided procedures.
Disclosure
Nilam Soni receives support from the U.S. Department of Veterans Affairs, Quality Enhancement Research Initiative (QUERI) Partnered Evaluation Initiative Grant (HX002263-01A1). Brian P. Lucas receives support from the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development and Dartmouth SYNERGY, National Institutes of Health, National Center for Translational Science (UL1TR001086). The contents of this publication do not represent the views of the U.S. Department of Veterans Affairs or the United States Government.
1. Bahner DP, Hughes D, Royall NA. I-AIM: a novel model for teaching and performing focused sonography. J Ultrasound Med. 2012;31:295-300. PubMed
2. Mathews BK, Zwank M. Hospital Medicine Point of Care Ultrasound Credentialing: An Example Protocol. J Hosp Med. 2017;12(9):767-772. PubMed
3. Hall MK, Hall J, Gross CP, et al. Use of Point-of-Care Ultrasound in the Emergency Department: Insights From the 2012 Medicare National Payment Data Set. J Ultrasound Med. 2016;35:2467-2474. PubMed
4. Amini R, Wyman MT, Hernandez NC, Guisto JA, Adhikari S. Use of Emergency Ultrasound in Arizona Community Emergency Departments. J Ultrasound Med. 2017;36(5):913-921. PubMed
5. Herbst MK, Camargo CA, Jr., Perez A, Moore CL. Use of Point-of-Care Ultrasound in Connecticut Emergency Departments. J Emerg Med. 2015;48:191-196. PubMed
6. Kory PD, Pellecchia CM, Shiloh AL, Mayo PH, DiBello C, Koenig S. Accuracy of ultrasonography performed by critical care physicians for the diagnosis of DVT. Chest. 2011;139:538-542. PubMed
7. Lucas BP, Candotti C, Margeta B, et al. Hand-carried echocardiography by hospitalists: a randomized trial. Am J Med. 2011;124:766-774. PubMed
8. Oks M, Cleven KL, Cardenas-Garcia J, et al. The effect of point-of-care ultrasonography on imaging studies in the medical ICU: a comparative study. Chest. 2014;146:1574-1577. PubMed
9. Koenig S, Chandra S, Alaverdian A, Dibello C, Mayo PH, Narasimhan M. Ultrasound assessment of pulmonary embolism in patients receiving CT pulmonary angiography. Chest. 2014;145:818-823. PubMed
10. Mercaldi CJ, Lanes SF. Ultrasound guidance decreases complications and improves the cost of care among patients undergoing thoracentesis and paracentesis. Chest. 2013;143:532-538. PubMed
11. Patel PA, Ernst FR, Gunnarsson CL. Ultrasonography guidance reduces complications and costs associated with thoracentesis procedures. J Clin Ultrasound. 2012;40:135-141. PubMed
12. Stolz L, O’Brien KM, Miller ML, Winters-Brown ND, Blaivas M, Adhikari S. A review of lawsuits related to point-of-care emergency ultrasound applications. West J Emerg Med. 2015;16:1-4. PubMed
13. Blaivas M, Pawl R. Analysis of lawsuits filed against emergency physicians for point-of-care emergency ultrasound examination performance and interpretation over a 20-year period. Am J Emerg Med. 2012;30:338-341. PubMed
14. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89:1681-1686. PubMed
Any conversation about point-of-care ultrasound (POCUS) inevitably brings up discussion about credentialing, privileging, and certification. While credentialing and privileging are institution-specific processes, competency certification can be extramural through a national board or intramural through an institutional process.
Some institutions have begun to develop intramural certification pathways for POCUS competency in order to grant privileges to hospitalists. In this edition of the Journal of Hospital Medicine, Mathews and Zwank2 describe a multidisciplinary collaboration to provide POCUS training, intramural certification, and quality assurance for hospitalists at one hospital in Minnesota. This model serves as a real-world example of how institutions are addressing the need to certify hospitalists in basic POCUS competency. After engaging stakeholders from radiology, critical care, emergency medicine, and cardiology, institutional standards were developed and hospitalists were assessed for basic POCUS competency. Certification included assessments of hospitalists’ knowledge, image acquisition, and image interpretation skills. The model described by Mathews did not assess competency in clinical integration but laid the groundwork for future evaluation of clinical outcomes in the cohort of certified hospitalists.
Although experts may not agree on all aspects of competency in POCUS, most will agree with the basic principles outlined by Mathews and Zwank. Initial certification should be based on training and an initial assessment of competency. Components of training should include ultrasound didactics, mentored hands-on practice, independent hands-on practice, and image interpretation practice. Ongoing certification should be based on quality assurance incorporated with an ongoing assessment of skills. Additionally, most experts will agree that competency can be recognized, and formative and summative assessments that combine a gestalt of provider skills with quantitative scoring systems using checklists are likely the best approach.
The real question is, what is the goal of certification of POCUS competency? Development of an institutional certification process demands substantive resources of the institution and time of the providers. Institutions would have to invest in equipment and staff to operate a full-time certification program, given the large number of providers that use POCUS and justify why substantive resources are being dedicated to certify POCUS skills and not others. Providers may be dissuaded from using POCUS if certification requirements are burdensome, which has potential negative consequences, such as reverting back to performing bedside procedures without ultrasound guidance or referring all patients to interventional radiology.
Conceptually, one may speculate that certification is required for providers to bill for POCUS exams, but certification is not required to bill, although institutions may require certification before granting privileges to use POCUS. However, based on the emergency medicine experience, a specialty that has been using POCUS for more than 20 years, billing may not be the main driver of POCUS use. A recent review of 2012 Medicare data revealed that <1% of emergency medicine providers received reimbursement for limited ultrasound exams.3 Despite the Accreditation Council for Graduate Medical Education (ACGME) requirement for POCUS competency of all graduating emergency medicine residents since 2001 and the increasing POCUS use reported by emergency medicine physicians,4,5 most emergency medicine physicians are not billing for POCUS exams. Maybe use of POCUS as a “quick look” or extension of the physical examination is more common than previously thought. Although billing for POCUS exams can generate some clinical revenue, the benefits for the healthcare system by expediting care,6,7 reducing ancillary testing,8,9 and reducing procedural complications10,11 likely outweigh the small gains from billing for limited ultrasound exams. As healthcare payment models evolve to reward healthcare systems that achieve good outcomes rather than services rendered, certification for the sole purpose of billing may become obsolete. Furthermore, concerns about billing increasing medical liability from using POCUS are likely overstated because few lawsuits have resulted from missed diagnoses by POCUS, and most lawsuits have been from failure to perform a POCUS exam in a timely manner.12,13
Many medical students graduating today have had some training in POCUS14 and, as this new generation of physicians enters the workforce, they will likely view POCUS as part of their routine bedside evaluation of patients. If POCUS training is integrated into medical school and residency curricula, and national board certification incorporates basic POCUS competency, then most institutions may no longer feel obligated to certify POCUS competency locally, and institutional certification programs, such as the one described by Mathews and Zwank, would become obsolete.
For now, until all providers enter the workforce with basic competency in POCUS and medical culture accepts that ultrasound is a diagnostic tool available to any trained provider, hospitalists may need to provide proof of their competence through intramural or extramural certification. The work of Mathews and Zwank provides an example of how local certification processes can be established. In a future edition of the Journal of Hospital Medicine, the Society of Hospital Medicine Point-of-Care Ultrasound Task Force will present a position statement with recommendations for certification of competency in bedside ultrasound-guided procedures.
Disclosure
Nilam Soni receives support from the U.S. Department of Veterans Affairs, Quality Enhancement Research Initiative (QUERI) Partnered Evaluation Initiative Grant (HX002263-01A1). Brian P. Lucas receives support from the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development and Dartmouth SYNERGY, National Institutes of Health, National Center for Translational Science (UL1TR001086). The contents of this publication do not represent the views of the U.S. Department of Veterans Affairs or the United States Government.
Any conversation about point-of-care ultrasound (POCUS) inevitably brings up discussion about credentialing, privileging, and certification. While credentialing and privileging are institution-specific processes, competency certification can be extramural through a national board or intramural through an institutional process.
Some institutions have begun to develop intramural certification pathways for POCUS competency in order to grant privileges to hospitalists. In this edition of the Journal of Hospital Medicine, Mathews and Zwank2 describe a multidisciplinary collaboration to provide POCUS training, intramural certification, and quality assurance for hospitalists at one hospital in Minnesota. This model serves as a real-world example of how institutions are addressing the need to certify hospitalists in basic POCUS competency. After engaging stakeholders from radiology, critical care, emergency medicine, and cardiology, institutional standards were developed and hospitalists were assessed for basic POCUS competency. Certification included assessments of hospitalists’ knowledge, image acquisition skills, and image interpretation skills. The model described by Mathews and Zwank did not assess competency in clinical integration, but it laid the groundwork for future evaluation of clinical outcomes in the cohort of certified hospitalists.
Although experts may not agree on all aspects of competency in POCUS, most will agree with the basic principles outlined by Mathews and Zwank. Initial certification should be based on training and an initial assessment of competency. Components of training should include ultrasound didactics, mentored hands-on practice, independent hands-on practice, and image interpretation practice. Ongoing certification should be based on quality assurance combined with ongoing assessment of skills. Additionally, most experts will agree that competency can be recognized and that formative and summative assessments combining a gestalt of provider skills with quantitative, checklist-based scoring systems are likely the best approach.
The real question is, what is the goal of certifying POCUS competency? Developing an institutional certification process demands substantive institutional resources and provider time. Given the large number of providers that use POCUS, institutions would have to invest in equipment and staff to operate a full-time certification program and would have to justify why substantive resources are being dedicated to certifying POCUS skills and not others. Providers may be dissuaded from using POCUS if certification requirements are burdensome, with potential negative consequences such as reverting to bedside procedures without ultrasound guidance or referring all patients to interventional radiology.
One might assume that certification is required for providers to bill for POCUS exams, but it is not, although institutions may require certification before granting privileges to use POCUS. Moreover, the experience of emergency medicine, a specialty that has used POCUS for more than 20 years, suggests that billing may not be the main driver of POCUS use. A recent review of 2012 Medicare data revealed that <1% of emergency medicine providers received reimbursement for limited ultrasound exams.3 Despite the Accreditation Council for Graduate Medical Education (ACGME) requirement, in place since 2001, that all graduating emergency medicine residents be competent in POCUS, and despite the increasing POCUS use reported by emergency medicine physicians,4,5 most emergency medicine physicians are not billing for POCUS exams. Perhaps use of POCUS as a “quick look” or extension of the physical examination is more common than previously thought. Although billing for POCUS exams can generate some clinical revenue, the benefits to the healthcare system of expediting care,6,7 reducing ancillary testing,8,9 and reducing procedural complications10,11 likely outweigh the small gains from billing for limited ultrasound exams. As healthcare payment models evolve to reward healthcare systems for outcomes achieved rather than for services rendered, certification for the sole purpose of billing may become obsolete. Furthermore, concerns that billing increases medical liability from POCUS use are likely overstated: few lawsuits have resulted from diagnoses missed by POCUS, and most have stemmed from failure to perform a POCUS exam in a timely manner.12,13
Many medical students graduating today have had some training in POCUS,14 and as this new generation of physicians enters the workforce, they will likely view POCUS as part of the routine bedside evaluation of patients. If POCUS training is integrated into medical school and residency curricula, and national board certification incorporates basic POCUS competency, then most institutions may no longer feel obligated to certify POCUS competency locally, and institutional certification programs, such as the one described by Mathews and Zwank, would become obsolete.
For now, until all providers enter the workforce with basic competency in POCUS and medical culture accepts that ultrasound is a diagnostic tool available to any trained provider, hospitalists may need to provide proof of their competence through intramural or extramural certification. The work of Mathews and Zwank provides an example of how local certification processes can be established. In a future edition of the Journal of Hospital Medicine, the Society of Hospital Medicine Point-of-Care Ultrasound Task Force will present a position statement with recommendations for certification of competency in bedside ultrasound-guided procedures.
Disclosure
Nilam Soni receives support from the U.S. Department of Veterans Affairs, Quality Enhancement Research Initiative (QUERI) Partnered Evaluation Initiative Grant (HX002263-01A1). Brian P. Lucas receives support from the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development and Dartmouth SYNERGY, National Institutes of Health, National Center for Translational Science (UL1TR001086). The contents of this publication do not represent the views of the U.S. Department of Veterans Affairs or the United States Government.
1. Bahner DP, Hughes D, Royall NA. I-AIM: a novel model for teaching and performing focused sonography. J Ultrasound Med. 2012;31:295-300.
2. Mathews BK, Zwank M. Hospital medicine point of care ultrasound credentialing: an example protocol. J Hosp Med. 2017;12(9):767-772.
3. Hall MK, Hall J, Gross CP, et al. Use of point-of-care ultrasound in the emergency department: insights from the 2012 Medicare National Payment Data Set. J Ultrasound Med. 2016;35:2467-2474.
4. Amini R, Wyman MT, Hernandez NC, Guisto JA, Adhikari S. Use of emergency ultrasound in Arizona community emergency departments. J Ultrasound Med. 2017;36(5):913-921.
5. Herbst MK, Camargo CA Jr, Perez A, Moore CL. Use of point-of-care ultrasound in Connecticut emergency departments. J Emerg Med. 2015;48:191-196.
6. Kory PD, Pellecchia CM, Shiloh AL, Mayo PH, DiBello C, Koenig S. Accuracy of ultrasonography performed by critical care physicians for the diagnosis of DVT. Chest. 2011;139:538-542.
7. Lucas BP, Candotti C, Margeta B, et al. Hand-carried echocardiography by hospitalists: a randomized trial. Am J Med. 2011;124:766-774.
8. Oks M, Cleven KL, Cardenas-Garcia J, et al. The effect of point-of-care ultrasonography on imaging studies in the medical ICU: a comparative study. Chest. 2014;146:1574-1577.
9. Koenig S, Chandra S, Alaverdian A, Dibello C, Mayo PH, Narasimhan M. Ultrasound assessment of pulmonary embolism in patients receiving CT pulmonary angiography. Chest. 2014;145:818-823.
10. Mercaldi CJ, Lanes SF. Ultrasound guidance decreases complications and improves the cost of care among patients undergoing thoracentesis and paracentesis. Chest. 2013;143:532-538.
11. Patel PA, Ernst FR, Gunnarsson CL. Ultrasonography guidance reduces complications and costs associated with thoracentesis procedures. J Clin Ultrasound. 2012;40:135-141.
12. Stolz L, O’Brien KM, Miller ML, Winters-Brown ND, Blaivas M, Adhikari S. A review of lawsuits related to point-of-care emergency ultrasound applications. West J Emerg Med. 2015;16:1-4.
13. Blaivas M, Pawl R. Analysis of lawsuits filed against emergency physicians for point-of-care emergency ultrasound examination performance and interpretation over a 20-year period. Am J Emerg Med. 2012;30:338-341.
14. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. medical schools: results of a national survey. Acad Med. 2014;89:1681-1686.
© 2017 Society of Hospital Medicine